
01-Oracle-DBA-Interview-Questions

******************************************************************************
INITIALIZATION PARAMETERS
******************************************************************************
PARAMETERS
1.ASM_POWER_LIMIT:-Specifies the maximum power (speed) at which ASM rebalance operations transfer data; higher values complete rebalancing faster at the cost of more I/O.
2.SESSIONS:-SESSIONS specifies the maximum number of sessions that can be created in the system. Because every login requires a session, this parameter effectively determines the maximum number of concurrent users in the system.
3.DB_BLOCK_CHECKING
4.DB_CREATE_ONLINE_LOG_DEST_n
5.DB_FILE_MULTIBLOCK_READ_COUNT
6.DB_FILE_NAME_CONVERT:-DB_FILE_NAME_CONVERT is useful for creating a duplicate database for recovery purposes. It converts the filename of a new datafile on the primary database to a filename on the standby database. If you add a datafile to the primary database, you must add a corresponding file to the standby database. When the standby database is updated, this parameter converts the datafile name on the primary database to the datafile name on the standby database. The file on the standby database must exist and be writable, or the recovery process will halt with an error.
7.LOG_FILE_NAME_CONVERT:-converts the filename of a new log file on the primary database
to the filename of a log file on the standby database. If you add a log file to the primary
database, you must add a corresponding file to the standby database.
8.LOG_ARCHIVE_DEST_n:-For multiplexing the archived log files, sets the destination of the files (range n = 1 to 31). The LOG_ARCHIVE_DEST_n initialization parameter defines up to 31 destinations, each of which must specify either the LOCATION or the SERVICE attribute to indicate where to archive the redo data. All other attributes are optional. Note that whether you are specifying the LOCATION attribute or the SERVICE attribute, it must be the first attribute supplied in the list of attributes.
9.SHARED_SERVERS:-specifies the number of server processes that you want to create when an
instance is started. If system load decreases, then this minimum number of servers is
maintained. Therefore, you should take care not to set SHARED_SERVERS too high at system
startup.
11.NLS_LANGUAGE:-NLS_LANGUAGE specifies the default language of the database. This language is used for messages, day and month names, symbols for AD, BC, a.m., and p.m., and the default sorting mechanism.
The ALTER SYSTEM statement without the DEFERRED keyword modifies the global value of the parameter for all sessions in the instance, for the duration of the instance (until the database is shut down). The value of the following initialization parameters can be changed with ALTER SYSTEM:
15.ASM_DISKGROUPS:-specifies the list of disk group names that an ASM instance mounts automatically at startup.
16.ASM_DISKSTRING:-specifies the discovery string that limits the set of disks the ASM instance considers for discovery when building disk groups.
17.ASM_PREFERRED_READ_FAILURE_GROUPS
18.BACKGROUND_DUMP_DEST:-destination directory for background process trace files and the alert log (the bdump directory).
19.CONTROL_FILES:-Every database has a control file, which contains entries that describe the structure of the database (such as its name, the timestamp of its creation, and the names and locations of its datafiles and redo files). CONTROL_FILES specifies one or more names of control files, separated by commas.
20.CORE_DUMP_DEST
21.DB_nK_CACHE_SIZE:-Cache size with the block size of nK.
22.DB_BLOCK_SIZE:-specifies (in bytes) the size of Oracle database blocks. Typical values are 4096 and 8192. The value of this parameter must be a multiple of the physical block size at the device level. The value for DB_BLOCK_SIZE in effect at the time you create the database determines the size of the blocks, and it must remain set to its initial value. NON-MODIFIABLE.
23.DB_NAME:-specifies a database identifier of up to 8 characters. This parameter must be specified and must correspond to the name specified in the CREATE DATABASE statement.
24.DB_BLOCK_CHECKING:-DB_BLOCK_CHECKING specifies whether Oracle performs block checking for database blocks. Values: OFF (or FALSE), LOW, MEDIUM, and FULL (or TRUE).
25.DB_BLOCK_CHECKSUM:-Checksums are verified when a block is read, but only if this parameter is TYPICAL or FULL and the last write of the block stored a checksum. In FULL mode, Oracle also verifies the checksum before applying a change from update/delete statements and recomputes it after the change is applied. In addition, Oracle gives every log block a checksum before writing it to the current log.
26.DB_CACHE_SIZE:-Database buffer cache size
27.DB_CREATE_FILE_DEST:-specifies the default location for Oracle-managed datafiles. This location is also used as the default for Oracle-managed control files and online redo logs if no DB_CREATE_ONLINE_LOG_DEST_n parameters are specified.
28.DB_RECOVERY_FILE_DEST_SIZE:-specifies (in bytes) the hard limit on the total space to be
used by target database recovery files created in the flash recovery area.
29.DB_CREATE_ONLINE_LOG_DEST_n:- specifies the default location for Oracle-managed
control files and online redo logs. If more than one DB_CREATE_ONLINE_LOG_DEST_n
parameter is specified, then the control file or online redo log is multiplexed across the
locations of the other DB_CREATE_ONLINE_LOG_DEST_n parameters. One member of each
online redo log is created in each location, and one control file is created in each location
30.DB_FLASHBACK_RETENTION_TARGET:-DB_FLASHBACK_RETENTION_TARGET specifies the
upper limit (in minutes) on how far back in time the database may be flashed back. How far
back one can flashback a database depends on how much flashback data Oracle has kept in the
flash recovery area.
31.DB_KEEP_CACHE_SIZE:-Set keep pool cache size
33.DB_RECOVERY_FILE_DEST:-Destination for the FRA (flash recovery area), which stores backups, backup sets, archived log backups, and control file and spfile backups under a directory named for the SID.
34.DB_RECOVERY_FILE_DEST_SIZE:-Size limit of the directory in which your backups (backup sets, backup pieces, and archived logs) will be stored.
35.DIAGNOSTIC_DEST:-Destination where your DIAG directory (the Automatic Diagnostic Repository) will be generated, i.e. where trace, audit, and alert log files will be stored.
36.DISPATCHERS:-Configures dispatcher processes in the shared server architecture.
37.FAL_SERVER:-specifies the FAL (fetch archive log) server for a standby database.
The value is an Oracle Net service name, which is assumed to be configured properly on the
standby database system to point to the desired FAL server.
38.FAL_CLIENT:-specifies the FAL (fetch archive log) client name that is used by the FAL service,
configured through the FAL_SERVER parameter, to refer to the FAL client. The value is an Oracle
Net service name, which is assumed to be configured properly on the FAL server system to
point to the FAL client (standby database). Given the dependency of FAL_CLIENT on
FAL_SERVER, the two parameters should be configured or changed at the same time.
39.FAST_START_MTTR_TARGET:-enables you to specify the number of seconds the database takes to perform crash recovery of a single instance. When specified, FAST_START_MTTR_TARGET is overridden by LOG_CHECKPOINT_INTERVAL.
40.JAVA_POOL_SIZE:-Size of java pool
41.LARGE_POOL_SIZE:-Large pool size
42.LOCAL_LISTENER:-specifies a network name that resolves to an address or address list of
Oracle Net local listeners (that is, listeners that are running on the same machine as this
instance). The address or address list is specified in the TNSNAMES.ORA file or other address
repository as configured for your system.
43.LOG_ARCHIVE_CONFIG:-enables or disables the sending of redo logs to remote destinations
and the receipt of remote redo logs, and specifies the unique database names
(DB_UNIQUE_NAME) for each database in the Data Guard configuration.
44.LOG_ARCHIVE_DEST:-Destination of archived log files when there is no multiplexing of these
files.
45.MAX_DISPATCHERS:-specifies the maximum number of dispatcher processes allowed to be
running simultaneously. It can be overridden by the DISPATCHERS parameter and is maintained
for backward compatibility with older releases
46.MAX_DUMP_FILE_SIZE:-MAX_DUMP_FILE_SIZE specifies the maximum size of trace files (excluding the alert file). Change this limit if you are concerned that trace files may use too much space.
* A numerical value for MAX_DUMP_FILE_SIZE specifies the maximum size in operating system blocks.
* A number followed by a K or M suffix specifies the file size in kilobytes or megabytes.
* The special value string UNLIMITED means that there is no upper limit on trace file size. Thus, dump files can be as large as the operating system permits.
47.MAX_SHARED_SERVERS
48.MEMORY_TARGET:-Specifies the amount of memory available to the database. The
database automatically tunes the used memory to memory_target, allocating and releasing SGA
and PGA memory as needed.
49.MEMORY_MAX_TARGET:-Specifies the maximum value to which the DBA can increase the
memory_target parameter
50.OPEN_CURSORS:-OPEN_CURSORS specifies the maximum number of open cursors (handles
to private SQL areas) a session can have at once.
You can use this parameter to prevent a session from opening an excessive number of
cursors
51.PGA_AGGREGATE_TARGET:-Total size of memory assigned to all the PGA of users which will
be connected.
53.RESOURCE_LIMIT
54.RESULT_CACHE_MAX_RESULT:-Specifies the percentage of result_cache_max_size that a
single result set can use
55.RESULT_CACHE_MAX_SIZE:-Sets the maximum amount of SGA memory that can be utilized
by the Result Cache. Setting this to 0 disables the Result Cache.
56.SGA_TARGET:-Total size of the System Global Area (SGA).
57.SHARED_POOL_SIZE:-Size of the shared pool.
58.SHARED_SERVER_SESSIONS
59.STANDBY_ARCHIVE_DEST:-Destination on the standby database where the archived redo log files will be stored. STANDBY_ARCHIVE_DEST can be used to specify where archived logs received from a primary database are stored on a standby database. It is no longer necessary to set this parameter, because an appropriate location is automatically chosen.
60.STANDBY_FILE_MANAGEMENT
61.UNDO_RETENTION:-Specifies (in seconds) how long committed undo data should be retained in the UNDO tablespace; this retention supports consistent reads and Flashback operations. Retention is guaranteed only if the undo tablespace has the RETENTION GUARANTEE attribute.
62.UNDO_TABLESPACE:-UNDO_TABLESPACE specifies the undo tablespace to be used when an
instance starts up. If this parameter is specified when the
instance is in manual undo management mode, then an error will occur and startup
will fail.
63.USER_DUMP_DEST
The ALTER SYSTEM ... DEFERRED statement does not modify the global value of the parameter
for existing sessions, but the value will be modified for future sessions that connect to the
database. The value of the following initialization parameters can be changed with ALTER
SYSTEM ... DEFERRED:
64.AUDIT_FILE_DEST
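As a sketch, a deferred change takes effect only for sessions that connect after the statement is issued (the directory path shown is illustrative):
SQL> ALTER SYSTEM SET audit_file_dest = '/u01/app/oracle/admin/orcl/adump' DEFERRED;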
Displaying Current Parameter Values
To see the current settings for initialization parameters, use the following SQL*Plus command:
SQL> SHOW PARAMETERS
This command displays all parameters in alphabetical order, along with their current values.
Enter the following text string to display all parameters having BLOCK in their names:
SQL> SHOW PARAMETERS BLOCK
You can use the SPOOL command to write the output to a file.
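For example, to capture the parameter listing in a file (the file name is illustrative):
SQL> SPOOL parameters.lst
SQL> SHOW PARAMETERS
SQL> SPOOL OFF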
Parameters You Should Not Specify in the Parameter File
You should not specify the following two types of parameters in your parameter files:
* Parameters that you never alter except when instructed to do so by Oracle to resolve a problem
* Derived parameters, which normally do not need altering because their values are calculated automatically by the Oracle database server
When Parameters Are Set Incorrectly
Some parameters have a minimum setting below which an Oracle instance will not start. For
other parameters, setting the value too low or too high may cause Oracle to perform badly, but
it will still run. Also, Oracle may convert some values outside the acceptable range to usable
levels.
If a parameter value is too low or too high, or you have reached the maximum for some
resource, then Oracle returns an error. Frequently, you can wait a short while and retry the
operation when the system is not as busy. If a message occurs repeatedly, then you should shut
down the instance, adjust the relevant parameter, and restart the instance.
******************************************************************************
DATAGUARD QUESTIONS
******************************************************************************
Oracle Data Guard Interview questions
What are the types of Oracle Data Guard?
Oracle Data Guard standby databases are classified into two types based on the way they are created and the method used for redo apply. They are as follows:
1. Physical standby (Redo Apply technology)
2. Logical standby (SQL Apply technology)
What are the advantages in using Oracle Data Guard?
Following are the different benefits of using the Oracle Data Guard feature in your environment:
1. High Availability.
2. Data Protection.
3. Offloading backup operations to the standby database.
4. Automatic gap detection and resolution on the standby database.
5. Automatic role transition using Data Guard Broker.
What are the different services available in Oracle Data Guard?
Following are the different Services available in Oracle Data Guard of Oracle database.
1. Redo Transport Services.
2. Log Apply Services.
3. Role Transitions.
What are the different Protection modes available in Oracle Data Guard?
Following are the different protection modes available in Data Guard of Oracle database you
can use any one based on your application requirement.
1. Maximum Protection
2. Maximum Availability
3. Maximum Performance
How to check the protection mode of the primary database in your Oracle Data Guard setup?
By using the following query you can check the protection mode of the primary database in your Oracle Data Guard setup:
SELECT PROTECTION_MODE FROM V$DATABASE;
For example:
SQL> select protection_mode from v$database;

PROTECTION_MODE
--------------------
MAXIMUM PERFORMANCE
How to change the protection mode in an Oracle Data Guard setup?
By using the following statement you can change the protection mode of your primary database, after setting the required value in the corresponding LOG_ARCHIVE_DEST_n parameter on the primary database for the corresponding standby database:
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE
[PROTECTION | PERFORMANCE | AVAILABILITY];
Example:
SQL> alter database set standby database to maximize protection;
What are the advantages of using Physical standby database in Oracle Data Guard?
Advantages of using Physical standby database in Oracle Data Guard are as follows.
* High Availability.
* Load balancing (Backup and Reporting).
* Data Protection.
* Disaster Recovery.
What is a physical standby database in Oracle Data Guard?
Oracle standby databases are divided into physical standby databases and logical standby databases based on how the standby database is created and how redo is applied. A physical standby database is created as an exact, block-by-block copy of the primary database. Transactions that happen on the primary database are synchronized to the physical standby database using the Redo Apply method, which continuously applies redo data received from the primary database. A physical standby database can offload backup activity and reporting activity from the primary database. A physical standby database can be opened for read-only transactions, but redo apply does not happen during that time. From 11g onwards, however, using the Active Data Guard option (an extra-cost license), you can simultaneously open the physical standby database for read-only access and apply redo received from the primary database.
What is a logical standby database in Oracle Data Guard?
Oracle standby databases are divided into physical standby databases and logical standby databases based on how the standby database is created and how redo is applied. A logical standby database is created in a similar way to a physical standby database, and later you can alter the structure of the logical standby database. A logical standby database uses the SQL Apply method to stay synchronized with the primary database. SQL Apply converts the received redo into SQL statements and continuously applies those SQL statements on the logical standby database to keep it consistent with the primary database. The main advantage of a logical standby database compared to a physical standby database is that you can use it for reporting during SQL apply, i.e. the logical standby database must be open during SQL apply. Even though a logical standby database is opened in read/write mode, tables that are synchronized with the primary database are available only for read-only operations such as reporting and select queries; you can, however, add indexes and create materialized views on those tables. Although a logical standby database has advantages over a physical standby database, it has some restrictions on data types, types of DDL, types of DML, and types of tables.
What are the advantages of Logical standby database in Oracle Data Guard?
* Better usage of resource
* Data Protection
* High Availability
* Disaster Recovery
What is the usage of the DB_FILE_NAME_CONVERT parameter in an Oracle Data Guard setup?
The DB_FILE_NAME_CONVERT parameter is used in an Oracle Data Guard setup, specifically on standby databases. It is used to convert the location of datafiles on the standby database. This parameter is needed when you use a different directory structure on the standby database compared to the primary database's datafile locations.
What is the usage of the LOG_FILE_NAME_CONVERT parameter in an Oracle Data Guard setup?
The LOG_FILE_NAME_CONVERT parameter is used in an Oracle Data Guard setup, specifically on standby databases. It is used to convert the location of redo log files on the standby database. This parameter is needed when you use a different directory structure on the standby database compared to the primary database's redo log file locations.
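A hypothetical example (the paths shown are illustrative), set in the standby's spfile:
SQL> ALTER SYSTEM SET db_file_name_convert='/u01/prod/','/u01/stdby/' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET log_file_name_convert='/u01/prod/','/u01/stdby/' SCOPE=SPFILE;
Both parameters are static, so SCOPE=SPFILE and an instance restart are required.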
Step for Physical Standby
These are the steps to follow:
1. Enable forced logging
2. Create a password file
3. Configure a standby redo log
4. Enable archiving
5. Set up the primary database initialization parameters
6. Configure the listener and tnsnames to support the database on both nodes
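As a sketch of step 5, the primary database initialization parameters might be set like this (the names dg1, dg2 and the service name match the example discussed later in this section and are illustrative):
SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(dg1,dg2)';
SQL> ALTER SYSTEM SET log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=dg1';
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=dg2 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dg2';
SQL> ALTER SYSTEM SET standby_file_management=AUTO;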
You can monitor archived log shipping with the following query:
col name format a20
col thread# format 999
col sequence# format 999
col first_change# format 999999
col next_change# format 999999

SELECT thread#, sequence# AS "SEQ#", name, first_change# AS "FIRSTSCN",
       next_change# AS "NEXTSCN", archived, deleted, completion_time AS "TIME"
FROM v$archived_log;

You can also query V$LOG_HISTORY.
Tell me about the parameters which are used for a standby database?
LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST_STATE_n
LOG_ARCHIVE_CONFIG
LOG_FILE_NAME_CONVERT
STANDBY_FILE_MANAGEMENT
DB_FILE_NAME_CONVERT
DB_UNIQUE_NAME
CONTROL_FILES
FAL_CLIENT
FAL_SERVER
The LOG_ARCHIVE_CONFIG parameter enables or disables the sending of redo streams to the
standby sites. The DB_UNIQUE_NAME of the primary database is dg1 and the
DB_UNIQUE_NAME of the standby database is dg2. The primary database is configured to ship
redo log stream to the standby database. In this example, the standby database service is dg2.
Next, STANDBY_FILE_MANAGEMENT is set to AUTO so that when Oracle files are added or
dropped from the primary database, these changes are made to the standby databases
automatically. The STANDBY_FILE_MANAGEMENT is only applicable to the physical standby
databases.
Setting the STANDBY_FILE_MANAGEMENT parameter to AUTO is recommended when using Oracle Managed Files (OMF) on the primary database. Next, the primary database must be running in ARCHIVELOG mode.
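As a sketch, ARCHIVELOG mode can be enabled on the primary as follows:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;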
******************************************************************************
DATA COMMANDS
******************************************************************************
Oracle Database 11g: New Features in Data Guard
Active Data Guard
In Oracle Database 10g and below you could open the physical standby database for read-only activities, but only after stopping the recovery process.
In Oracle 11g, you can query the physical standby database in real time while applying the archived logs. This means the standby continues to be in sync with the primary, but you can use the standby for reporting.
Let us see the steps now.
First, cancel the managed standby recovery:
SQL> alter database recover managed standby database cancel;
Database altered.
Then, open the database as read only:
SQL> alter database open read only;
Database altered.
While the standby database is open in read-only mode, you can resume the managed recovery
process.
SQL> alter database recover managed standby database disconnect;
Database altered.
Snapshot Standby
In Oracle Database 11g, a physical standby database can be temporarily converted into an updateable one, called a Snapshot Standby Database.
In that mode, you can make changes to the database. Once the test is complete, you can roll back the changes made for testing and convert the database back into a standby undergoing normal recovery. This is accomplished by creating a restore point in the database and using the Flashback Database feature to flash back to that point and undo all the changes.
Steps:
Configure the flash recovery area, if it is not already done.
SQL> alter system set db_recovery_file_dest_size = 2G;
System altered.
SQL> alter system set db_recovery_file_dest= '+FRADG';
System altered.
Stop the recovery.
SQL> alter database recover managed standby database cancel;
Database altered.
Convert this standby database to snapshot standby using command
SQL> alter database convert to snapshot standby;
Database altered.
Now recycle the database
SQL> shutdown immediate
...
SQL> startup
ORACLE instance started.
Database is now open for read/write operations:
SQL> select open_mode, database_role from v$database;

OPEN_MODE  DATABASE_ROLE
---------- ----------------
READ WRITE SNAPSHOT STANDBY
After your testing is completed, you would want to convert the snapshot standby database back to a regular physical standby database by following the steps below:
SQL> connect / as sysdba
Connected.
SQL> shutdown immediate
SQL> startup mount
...
Database mounted.
SQL> alter database convert to physical standby;
Database altered.
Now shutdown, mount the database and start managed recovery.
SQL> shutdown
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
...
Database mounted.
Start the managed recovery process
SQL> alter database recover managed standby database disconnect;
Now the standby database is back in managed recovery mode. While the database was in snapshot standby mode, the archived logs from the primary were not applied to it. They will be applied now.
In 10g this can be done by following similar steps: a DR failover test with Flashback Database.
Redo Compression
In Oracle Database 11g you can compress the redo that goes across to the standby server via SQL*Net by setting the compression attribute to enable. This works only for the logs shipped during gap resolution. Here is the command you can use to enable compression:
alter system set log_archive_dest_2 = 'service=STDBYDB LGWR ASYNC valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=STDBYDB compression=enable';
******************************************************************************
DATA PUMP Questions
******************************************************************************
1. What is use of CONSISTENT option in exp?
Cross-table consistency. Implements SET TRANSACTION READ ONLY. Default value N.
Setting DIRECT=Y extracts data by reading it directly, bypassing the buffer cache and the SQL command-processing (evaluation) layer, so it should be faster. Default value N.
3. What is use of COMPRESS option in exp?
Consolidates into one initial extent. Specifies how export will manage the initial extent for the table data. This parameter is helpful during database re-organization. Export the objects (especially tables and indexes) with COMPRESS=Y. If a table was spanning 20 extents of 1M each (which is not desirable for performance), and you export the table with COMPRESS=Y, the generated DDL will have an INITIAL extent of 20M; later on, when importing, the extents will be coalesced. Sometimes it is desirable to export with COMPRESS=N, in situations where you do not have contiguous space on disk (tablespace) and do not want the import to fail.
4. How to improve exp performance?
1. Set the BUFFER parameter to a high value. Default is 256KB.
2. Stop unnecessary applications to free the resources.
3. If you are running multiple sessions, make sure they write to different disks.
4. Do not export to NFS (Network File System). Exporting to local disk is faster.
5. Set the RECORDLENGTH parameter to a high value.
6. Use DIRECT=yes (direct mode export).
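Putting several of these together, a tuned export command might look like this (the user, password, and file names are illustrative; note that BUFFER applies only to conventional-path exports, so with DIRECT=y the relevant sizing parameter is RECORDLENGTH):
exp system/password FILE=full.dmp LOG=full.log FULL=y DIRECT=y RECORDLENGTH=65535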
5. How to improve imp performance?
1. Place the file to be imported in separate disk from datafiles.
2. Increase the DB_CACHE_SIZE.
3. Set LOG_BUFFER to big size.
4. Stop redolog archiving, if possible.
5. Use COMMIT=n, if possible.
6. Set the BUFFER parameter to a high value. Default is 256KB.
7. It's advisable to drop indexes before importing to speed up the import process or set
INDEXES=N and building indexes later on after the import. Indexes can easily be recreated after
the data was successfully imported.
8. Use STATISTICS=NONE
9. Disable the INSERT triggers, as they fire during import.
10. Set Parameter COMMIT_WRITE=NOWAIT(in Oracle 10g) or COMMIT_WAIT=NOWAIT (in
Oracle 11g) during import.
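Combining these, a tuned import command might look like this (the user, password, and file names are illustrative):
imp system/password FILE=full.dmp LOG=imp.log FULL=y BUFFER=10485760 COMMIT=n INDEXES=n STATISTICS=none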
6. What is use of INDEXFILE option in imp?
Will write DDLs of the objects in the dumpfile into the specified file.
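For example (file names illustrative); this writes the DDL to the specified file without importing any data:
imp system/password FILE=full.dmp FULL=y INDEXFILE=index_ddl.sql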
7. What is use of IGNORE option in imp?
Will ignore the errors during import and will continue the import.
8. What are the differences between expdp and exp (Data Pump or normal exp/imp)?
Data Pump is server centric (files will be at server).
Data Pump has APIs, from procedures we can run Data Pump jobs.
In Data Pump, we can stop and restart the jobs.
Data Pump will do parallel execution.
Tapes & pipes are not supported in Data Pump.
Data Pump consumes more undo tablespace.
Data Pump import will create the user, if user doesn’t exist.
9. Why expdp is faster than exp (or) why Data Pump is faster than conventional export/import?
Data Pump is block mode, exp is byte mode.
Data Pump will do parallel execution.
Data Pump uses direct path API.
10. How to improve expdp performance?
Use the PARALLEL option, which increases worker threads. This should be set based on the number of CPUs.
11. How to improve impdp performance?
Use the PARALLEL option, which increases worker threads. This should be set based on the number of CPUs.
12. In Data Pump, where is the job info stored (or) if you restart a job in Data Pump, how does it know from where to resume?
Whenever a Data Pump export or import is running, Oracle creates a master table named after the JOB_NAME, which is deleted once the job is done. From this table, Oracle finds out how much of the job has completed and from where to continue, etc.
The default export job name will be SYS_EXPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.
The default import job name will be SYS_IMPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.
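A stopped job can be re-attached and resumed using its job name; for example, for a full export job with the default name (user and password are illustrative):
expdp system/password ATTACH=SYS_EXPORT_FULL_01
Export> CONTINUE_CLIENT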
13. What is the order of importing objects in impdp?
Tablespaces
Users
Roles
Database links
Sequences
Directories
Synonyms
Types
Tables/Partitions
Views
Comments
Packages/Procedures/Functions
Materialized views
14. How to import only metadata?
CONTENT= METADATA_ONLY
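For example (the user, directory object, and file names are illustrative):
impdp system/password DIRECTORY=DP_DIR DUMPFILE=full.dmp CONTENT=METADATA_ONLY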
15. How to import into different user/tablespace/datafile/table?
REMAP_SCHEMA
REMAP_TABLESPACE
REMAP_DATAFILE
REMAP_TABLE
REMAP_DATA
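For example, to import the SCOTT schema into a different user and tablespace (all names are illustrative):
impdp system/password DIRECTORY=DP_DIR DUMPFILE=scott.dmp REMAP_SCHEMA=SCOTT:SCOTT2 REMAP_TABLESPACE=USERS:USERS2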
******************************************************************************
Data Pump commands
******************************************************************************
How to work with Oracle Data Pump?
Oracle Data Pump
Oracle Data Pump is a feature of Oracle Database 10g that enables very fast bulk data and
metadata movement between Oracle databases. Oracle Data Pump provides new high-speed,
parallel Export and Import utilities (expdp and impdp) as well as a Web-based Oracle Enterprise
Manager interface.
Data Pump Export and Import utilities are typically much faster than the original Export and Import utilities. A single thread of Data Pump Export is about twice as fast as original Export, while Data Pump Import is 15-45 times faster than original Import.
Data Pump jobs can be restarted without loss of data, whether the stoppage was voluntary or involuntary.
Data Pump jobs support fine-grained object selection. Virtually any type of object can be
included or excluded in a Data Pump job.
Data Pump supports the ability to load one instance directly from another (network import) and
unload a remote instance (network export).
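As a minimal sketch of the network mode just described (the credentials, the database link SOURCE_LINK, and the directory DP_DIR are hypothetical, not from this document):

```sql
-- Pull the SCOTT schema directly over a database link; no dump file is written on either side.
$ impdp system/manager SCHEMAS=SCOTT NETWORK_LINK=SOURCE_LINK DIRECTORY=DP_DIR LOGFILE=net_imp.log
```

The DIRECTORY object is still needed here, but only for the log file.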
Data Pump Export (expdp) :-
Before exporting, grant the required privileges to the user performing the operation:
EXP_FULL_DATABASE for a full export and IMP_FULL_DATABASE for a full import.
SQL> CONNECT SYS/BABU@KEYSTONE AS SYSDBA
Connected.
SQL> GRANT CREATE ANY DIRECTORY TO ORTHONOVC16;
Grant succeeded.
SQL> CREATE OR REPLACE DIRECTORY OTHOC16 AS 'D:\ORTHOC16';
Directory created.
SQL> GRANT READ,WRITE ON DIRECTORY OTHOC16 TO ORTHONOVC16;
Grant succeeded.
SQL> GRANT EXP_FULL_DATABASE,IMP_FULL_DATABASE TO ORTHONOVC16;
Grant succeeded.
Table level Export :-
SQL> HOST expdp ORTHONOVC16/ORTHONOVC16@KEYSTONE tables=G_USER
DIRECTORY=OTHOC16 DUMPFILE=ORTHO_G_USER.DMP LOGFILE=ORTHOLOG.LOG
The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.
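As a hedged sketch (not in the original notes), the dump file created above could be imported into an existing table using that parameter; the import log file name is an example:

```sql
SQL> HOST impdp ORTHONOVC16/ORTHONOVC16@KEYSTONE TABLES=G_USER
DIRECTORY=OTHOC16 DUMPFILE=ORTHO_G_USER.DMP LOGFILE=IMP_G_USER.LOG
TABLE_EXISTS_ACTION=APPEND
```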
Schema level export :-
SQL> HOST expdp ORTHONOVC16/ORTHONOVC16@KEYSTONE SCHEMAS=ORTHONOVC16
DIRECTORY=OTHOC16 DUMPFILE=ORTHONOVC16.DMP LOGFILE=ORTHONOVC16.LOG
Database level export :-
SQL> HOST expdp ORTHONOVC16/ORTHONOVC16@KEYSTONE FULL=Y DIRECTORY=OTHOC16
DUMPFILE=DBORTHO.DMP LOGFILE=DBORTHO.LOG
Other export options :-
Exclude = View, Procedure, Function, Constraint, Index
Include = Table: " in ( 'emp') "
Content = ALL (the default) / data_only / metadata_only
Estimate_Only = Before exporting, you can estimate the size of the dump file by running
the export with the parameter " ESTIMATE_ONLY=YES "
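For instance, an illustrative sketch of an estimate-only run for the schema used above (the log file name is an example; ESTIMATE_ONLY writes no dump file, so DUMPFILE is omitted):

```sql
SQL> HOST expdp ORTHONOVC16/ORTHONOVC16@KEYSTONE SCHEMAS=ORTHONOVC16
DIRECTORY=OTHOC16 ESTIMATE_ONLY=YES LOGFILE=ESTIMATE.LOG
```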
******************************************************************************
Difference between exp/imp and expdp/impdp
******************************************************************************
Basic differences between Data Pump and Export/Import
If you have worked with a pre-10g database, you are probably familiar with the exp/imp utilities
of Oracle Database. Oracle 10g introduced a new feature called Data Pump export and import.
Data Pump export/import differs from the original export/import; the differences are listed below.
1)Impdp/Expdp are self-tuning utilities. Tuning parameters that were used in original Export and
Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump
Export and Import.
2)Data Pump represents metadata in the dump file set as XML documents rather than as DDL
commands.
3)Impdp/Expdp use parallel execution rather than a single stream of execution, for improved
performance.
4)In Data Pump, expdp full=y followed by impdp schemas=prod is the same as expdp schemas=prod
followed by impdp full=y, whereas original export/import does not always exhibit this behavior.
5)Expdp/Impdp access files on the server rather than on the client.
6)Expdp/Impdp operate on a group of files called a dump file set rather than on a single
sequential dump file.
7)Sequential media, such as tapes and pipes, are not supported in Oracle Data Pump. In
original export/import, however, we could directly compress the dump by using pipes.
8)The Data Pump method for moving data between different database versions is different than
the method used by original Export/Import.
9)When you are importing data into an existing table using either APPEND or TRUNCATE, if any
row violates an active constraint, the load is discontinued and no data is loaded. This is different
from original Import, which logs any rows that are in violation and continues with the load.
10)Expdp/Impdp consume more undo tablespace than original Export and Import.
11)If a table has compression enabled, Data Pump Import attempts to compress the data being
loaded, whereas the original Import utility loaded data in such a way that even if a table had
compression enabled, the data was not compressed upon import.
12)Data Pump supports character set conversion for both direct path and external tables. Most
of the restrictions that exist for character set conversions in the original Import utility do not
apply to Data Pump. The one case in which character set conversions are not supported under
the Data Pump is when using transportable tablespaces.
13)There is no option to merge extents when you re-create tables. In original Import, this was
provided by the COMPRESS parameter. Instead, extents are reallocated according to storage
parameters for the target table.
******************************************************************************
Data Pump Scenarios
******************************************************************************
Scenarios
1) Exporting full database
2) Exporting schemas (that is, users)
3) Exporting individual tables
4) Exporting tablespaces
Example of Exporting Full Database
The following example shows how to export full database
$exp USERID=scott/tiger FULL=y FILE=myfull.dmp
In the above command, FILE option specifies the name of the dump file, FULL option specifies
that you want to export the full database, USERID option specifies the user account to connect
to the database. Note, to perform full export the user should have DBA or EXP_FULL_DATABASE
privilege.
Example of Exporting Schemas
To export Objects stored in a particular schemas you can run export utility with the following
arguments
$exp USERID=scott/tiger OWNER=(SCOTT,ALI) FILE=exp_own.dmp
The above command will export all the objects stored in the SCOTT and ALI schemas.
Exporting Individual Tables
To export individual tables give the following command
$exp USERID=scott/tiger TABLES=(scott.emp,scott.sales) FILE=exp_tab.dmp
This will export scott’s emp and sales tables.
Exporting Tablespaces
To export individual tablespace give the following command
exp userid=scott/tiger TABLESPACES=(tbsp1, tbsp2, ... tbsp#) file=exp_tablespace.dmp
Query Mode
From Oracle 8i one can use the QUERY= export parameter to selectively unload a subset of the
data from a table. You may need to escape special chars on the command line, for example:
query=\”where deptno=10\”. Look at these examples:
exp scott/tiger tables=emp query="where deptno=10"
exp scott/tiger file=abc.dmp tables=abc query=\"where sex=\'f\'\" rows=yes
Exporting Consistent Image of the tables
If you include CONSISTENT=Y option in export command argument then, Export utility will
export a consistent image of the table i.e. the changes which are done to the table during
export operation will not be exported.
Usage of a parameter file:
exp userid=scott/tiger@orcl parfile=export.txt
… where export.txt contains:
BUFFER=10000000
FILE=account.dmp
FULL=n
OWNER=scott
GRANTS=y
COMPRESS=y
* Exports broken up into 3 categories
o Incremental - will backup only the tables that have changed since the last incremental,
cumulative or complete export.
o Cumulative - will backup only the tables that have changed since the last cumulative or
complete export.
o Complete - will back up the entire database and reset the change tracking on which
incremental and cumulative exports are based. The parameter "INCTYPE=COMPLETE"
must be specified to perform this type of export.
* Export broken up into 3 modes
o Exporting table - export table or group of tables
o Exporting users - can backup all objects in a particular users schema including
tables/data/grants/indexes
o Exporting full database - exports the entire database
* Import broken into the same 3 modes as export
o Importing table/s
o Importing user/s
o Importing full database
Data pump EXPDP/IMPDP
Data pump expdp/impdp utilities were first introduced in Oracle 10g (10.1). These new utilities
were considered high-performance replacements for the legacy export/import that had been in
use since Oracle 7. Some of the key advantages are: greater data/metadata filtering, handling of
partitioned tables during import and support of the full range of data types.
* Expdp/Impdp provide same categories and modes as the legacy exp/imp utility
o Plus more
* Legacy exp/imp commands are transformed into new parameters
o Legacy -> Data Pump
o user/s -> schema/s
o file -> dumpfile
o log -> logfile
o For the full list of the new parameter mappings, see the data pump legacy support
documentation for 11.2, which is the latest: Oracle 11.2 DP_Legacy Parameter Mappings
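To illustrate the mapping (a sketch with hypothetical user, file, and directory names), the same schema export expressed both ways:

```sql
# Legacy exp syntax
$ exp scott/tiger owner=scott file=scott.dmp log=scott.log
# Equivalent Data Pump syntax (Data Pump also needs a directory object)
$ expdp scott/tiger schemas=scott dumpfile=scott.dmp logfile=scott.log directory=DP_DIR
```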
* Expdp/Impdp offers ability to
o Stop/Kill/Attach to a datapump job
o Export/Import data in parallel
+ Need to use the '%U' in your dumpfile name
# dumpfile=exp_test1_09202012_%U.dmp
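A sketch of a parallel export using the '%U' substitution shown above (the schema, directory, and degree of parallelism are examples, not from this document):

```sql
$ expdp scott/tiger SCHEMAS=SCOTT DIRECTORY=DP_DIR PARALLEL=4
DUMPFILE=exp_test1_09202012_%U.dmp LOGFILE=exp_test1.log
```

With PARALLEL=4 and the %U template, Data Pump can generate up to four dump files, one per worker.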
* Expdp/Impdp major features
o Overall performance of datapump import is 15-45x faster than legacy import
o Network mode - allows import over dblink
o Restart - ability to stop and reattach to the job
o Include/Exclude - allow/disallow objects to be included/excluded during export/import
o Content - metadata_only, data_only or all
o Estimate - provides the estimated size of dumpfiles so space allocation can be made
beforehand
o Compression - ability to compress your data up to 5x
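For example, a hedged sketch of a compressed export (the names are examples; in 11g, COMPRESSION=ALL requires the Advanced Compression option, and COMPRESSION=METADATA_ONLY is the default):

```sql
$ expdp scott/tiger SCHEMAS=SCOTT DIRECTORY=DP_DIR COMPRESSION=ALL
DUMPFILE=scott_comp.dmp LOGFILE=scott_comp.log
```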
RMAN All major Restoration and Recovery Scenarios
1. Complete Closed Database Recovery.
- It is assumed that your control files are still accessible.
- You have a backup, done for example with backup database plus archivelog;
i) Your first step is to make sure that the target database is shut down:
$ sqlplus "/ as SYSDBA"
SQL> shutdown abort;
ORACLE instance shut down.
ii) Next, you need to start up your target database in mount mode.
- RMAN cannot restore datafiles unless the database is at least in mount mode, because
RMAN needs to be able to access the control file to determine which backup sets are necessary
to recover the database.
- If the control file isn't available, you have to recover it first. Issue the STARTUP MOUNT
command shown in the following example to mount the database:
SQL> startup mount;
Oracle instance started.
Database mounted.
iii) Use RMAN to restore the database and recover the database.
To use RMAN, connect to the target database:
$ rman target / catalog rman/rman@rmancat
- When the restore command is executed, RMAN will automatically go to its last good backup
set and restore the datafiles to the state they were in when that backup set was created.
- When restoring database files, RMAN reads the datafile header and determines whether
the file needs to be restored. The recovery is done by allocating a channel for I/O and then
issuing the RMAN restore database command. You don't need to allocate a channel explicitly;
instead, you can use the default channel mode:
RMAN> restore database;
RMAN> recover database;
SQL> alter database open;
2. System tablespace is missing
In this case complete recovery is performed; only the system tablespace is missing, so the
database can be opened without the resetlogs option.
$ rman target /
RMAN> startup mount;
RMAN> restore tablespace SYSTEM;
RMAN> recover tablespace SYSTEM;
RMAN> alter database open;
3. Complete Open Database Recovery. Non system tablespace is missing, database is up
$ rman target /
RMAN> sql 'alter tablespace <tbs> offline immediate';
RMAN> restore tablespace <tbs>;
RMAN> recover tablespace <tbs>;
RMAN> sql 'alter tablespace <tbs> online';
To restore/recover only datafile(s)
$ rman target /
RMAN> sql 'alter database datafile <file#> offline';
RMAN> restore datafile <file#>;
RMAN> recover datafile <file#>;
RMAN> sql 'alter database datafile <file#> online' ;
Note: datafile_name(within single quotes) can also be used instead of file#
4.Complete Open Database Recovery (when the database is initially closed). Non system
tablespace is missing
A user datafile is reported missing when trying to startup the database. The datafile can be
turned offline and the database started up. Restore and recovery are performed using Rman.
After recovery is performed the datafile can be turned online again.
sqlplus "/ as sysdba"
startup mount
alter database datafile <file#> offline;
alter database open;
exit;
$rman target /
RMAN> restore datafile <file#>;
RMAN> recover datafile <file#>;
RMAN> sql 'alter tablespace <tablespace_name> online';
Note: datafile_name(within single quotes) can also be used instead of file#
5.To restore a tablespace to a new location
$ rman target / catalog rman/rman@rcat
Take the tablespace offline.
Specify an accessible location to which you can restore the damaged datafile for the offline
tablespace.
Restore the datafile to the new location.
Switch the restored datafile so that the control file considers it the current datafile.
To restore the datafiles for tablespace USERS to a new location on disk:
run {
allocate channel ch1 type disk;
sql 'ALTER TABLESPACE USERS OFFLINE IMMEDIATE';
set newname for datafile '/disk1/oracle/users_1.dbf' to '/disk2/oracle/users_1.dbf';
restore tablespace users;
# make the control file recognize the restored file as current
switch datafile all;
}
RMAN> recover tablespace USERS;
RMAN> sql 'alter tablespace USERS online';
6. Recovery of a Datafile that has no backups (database is up)
If a non-system datafile that has not been backed up since the last backup is missing, recovery
can be performed provided that all archived logs since the creation of the missing datafile exist.
Since the database is up, you can check the tablespace name and take it offline. The offline
immediate option is used to avoid the update of the datafile header.
Pre requisites: All relevant archived logs.
$ rman target /
RMAN> sql 'alter database datafile <file#> offline';
RMAN> restore datafile <file#> ;
-- no need to create a blank file, restore command takes care of that.
RMAN> recover datafile <file#>;
RMAN> sql 'alter database datafile <file#> online';
Note: datafile_name(within single quotes) can also be used instead of file#
7. Control File Recovery
Case-1 – Autobackup is available
Always multiplex your controlfiles. If you lose only one controlfile, you can replace it with a
surviving copy and start up the database. If all copies of the controlfile are missing, the
database will crash.
Pre requisites: A backup of your controlfile and all relevant archived logs. When using RMAN,
always set the controlfile autobackup configuration parameter to ON.
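The autobackup setting can be turned on in advance like this (a sketch; the disk format string is an example, but the %F substitution variable is mandatory in it):

```sql
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backup/cf_%F';
```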
rman target /
RMAN> startup nomount;
RMAN> restore controlfile from autobackup;
RMAN> alter database mount;
RMAN> recover database;
RMAN> alter database open resetlogs;
Make a new complete backup, as the database is open in a new incarnation and the previous
archived logs are no longer relevant.
Case-2 – Autobackup is not available but controlfile backupset is available
rman target /
RMAN> startup nomount;
RMAN> restore controlfile from <backupset_location>;
RMAN> alter database mount;
RMAN> restore database; --required if datafile(s) have been added after the backup
RMAN> recover database;
RMAN> alter database open resetlogs;
Make a new complete backup, as the database is open in a new incarnation and the previous
archived logs are no longer relevant.
Case-3 – If no backup is available, create the controlfile manually using script and then
recover as given above.
Note: RMAN automatically searches in specific locations for online and archived redo logs
during recovery that are not recorded in the RMAN repository, and catalogs any that it finds.
RMAN attempts to find a valid archived log in any of the current archiving destinations with the
current log format. The current format is specified in the initialization parameter file used to
start the instance (or all instances in a Real Application Clusters installation). Similarly, RMAN
attempts to find the online redo logs by using the filenames as specified in the control file.
8. Incomplete Recovery, Until time/sequence/scn
Incomplete recovery may be necessary when the database crashes and needs to be
recovered, and in the recovery process you find that an archived log is missing. In this case
recovery can only be made until the sequence before the one that is missing. Another scenario
for incomplete recovery occurs when an important object was dropped or incorrect data was
committed on it. In this case recovery needs to be performed until before the object was
dropped.
Pre requisites: A full closed or open database backup and archived logs, the time or sequence
that the 'until' recovery needs to be performed.
If the database is open, shut it down to perform a full restore.
shutdown abort
startup nomount
=============================
$ rman target / rcvcat rman/rman@rcat
RMAN> run {
set until time "to_date('2012/01/23 16:00:00',
'YYYY/MM/DD HH24:MI:SS')";
allocate channel d1 type disk;
restore controlfile to '/tmp/cf';
replicate controlfile from '/tmp/cf';
sql 'alter database mount';
restore database;
recover database;
}
Make a new complete backup, as the database is opened in a new incarnation and the previous
archived logs are no longer relevant.
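The same incomplete recovery can also be driven by a log sequence or SCN instead of a point in time; a hedged sketch (the sequence, thread, and SCN values are examples only):

```sql
RMAN> run {
  set until sequence 120 thread 1;   # or: set until scn 1048438;
  restore database;
  recover database;
}
RMAN> alter database open resetlogs;
```

Recovery stops just before the specified sequence (or at the specified SCN), so pick the value immediately after the last change you want to keep.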
9. Recovering After the Loss of All Members of an Online Redo Log Group
If a media failure damages all members of an online redo log group, then different scenarios
can occur depending on the type of online redo log group affected by the failure and the
archiving mode of the database.
If the damaged log group is inactive, then it is not needed for crash recovery; if it is active,
then it is needed for crash recovery.
SQL> startup mount
Case-1 If the group is INACTIVE
Then it is not needed for crash recovery
Clear the archived or unarchived group. (For archive status, check in v$log)
1.1 Clearing Inactive, Archived Redo
alter database clear logfile group 1 ;
alter database open ;
1.2 Clearing Inactive, Not-Yet-Archived Redo
alter database clear unarchived logfile group 1 ;
OR
(If there is an offline datafile that requires the cleared log to bring it online, then the
keywords UNRECOVERABLE DATAFILE are required. The datafile and its entire tablespace have
to be dropped because the redo necessary to bring it online is being cleared, and there is no
copy of it. )
alter database clear unarchived logfile group 1 unrecoverable datafile;
Take a complete backup of database.
And now open database:
alter database open ;
Case-2 If the group is ACTIVE
Restore backup and perform an incomplete recovery.
And open database using resetlogs
alter database open resetlogs;
Case-3 If the group is CURRENT
Restore backup and perform an incomplete recovery.
And open database using resetlogs
alter database open resetlogs;
10. Restoring database to new host from RMAN backup
1) Restore the database backup and the archive log backup (if it is a hot backup) to the target server.
2) Copy the ORACLE_HOME from source to target if it is not already there.
3) If you don't have a controlfile backup that was taken after the cold backup, then take a
control file backup on the source.
RMAN> backup current controlfile to '<path/filename.ctl>';
or
SQL> alter database backup controlfile to '<path/filename.ctl>';
4) Copy this controlfile backup to target node
5) On target:
Create the pfile or copy it from the source, and change the following parameters:
IFILE
*_DUMP_DEST
LOG_ARCHIVE_DEST
CONTROL_FILES
$ export NLS_DATE_FORMAT="yyyy-mm-dd hh24:mi"
$ rman target /
RMAN> sql 'create spfile from pfile';
RMAN> startup nomount ;
RMAN> restore controlfile from '<path/filename.ctl>';
RMAN> alter database mount ;
RMAN> list backup; -- Note the SCN or time of the backup you want to restore
$ rman target /
RMAN> restore database until time '<date/time>';
OR
RMAN> restore database until scn <scn_number>;
OR
RMAN> restore database from tag '<tag_name>';
And now…
RMAN> recover database;
RMAN> alter database open resetlogs ;
Note: The above method can also be used when you want to restore the database from an old
backup instead of the latest one.
11. Restoring backups from tape.
Use the following steps to restore backups from tape.
$ export NLS_DATE_FORMAT="yyyy-mm-dd hh24:mi"
RMAN> list backup ; -- Note the scn or time of the backup you want to restore.
RMAN> run{
allocate channel t1 type 'SBT_TAPE' parms="ENV=(NB_ORA_CLIENT=ibm5003bk)";
restore database until scn <scn_number>; --scn number as in list backup output
recover database ;
}
Notes:
1) UNTIL SCN can be used with the recover command as well for incomplete recovery.
Another option is to use SET UNTIL within the run block just before the restore.
2) FROM TAG '<tag_name>' can also be used (instead of the UNTIL clause) with the restore command.
******************************************************************************
******************************************************************************
**********
******************************************************************************
******************************************************************************
**********
55
Rman Scenarios
******************************************************************************
******************************************************************************
**********
******************************************************************************
******************************************************************************
**********
******************************************************************************
******************************************************************************
**********
RMAN All major Restoration and Recovery Scenarios
1. Complete Closed Database Recovery.
- It is assumed that your control files are still accessible.
- You have a backup, done for example with backup database plus archivelog;
i) Your first step is to make sure that the target database is shut down:
$ sqlplus “/ as SYSDBA”
SQL> shutdown abort;
ORACLE instance shut down.
ii) Next, you need to start up your target database in mount mode.
- RMAN cannot restore datafiles unless the database is at least in mount mode, because
RMAN needs to be able to access the control file to determine which backup sets are necessary
56
to recover the database.
- If the control file isn't available, you have to recover it first. Issue the STARTUP MOUNT
command shown in the following example to mount the database:
SQL> startup mount;
Oracle instance started.
Database mounted.
iii) Use RMAN to restore the database and recover the database.
To use RMAN, connect to the target database:
$ rman target / catalog rman/rman@rmancat
- When the restore command is executed, RMAN will automatically go to its last good backup
set and restore the datafiles to the state they were in when that backup set was created.
- When restoring database files, RMAN reads the datafile header and makes the
determination as to whether the file needs to be restored. The recovery is done by allocating a
channel for I/O and then issuing the RMAN restore database command.
you don't need to allocate a channel explicitly. Instead, you can use the default channel
mode:
RMAN> restore database;
RMAN> recover database;
SQL> alter database open;
57
2. System tablespace is missing
In this case complete recovery is performed, only the system tablespace is missing, so the
database can be opened without resetlogs option.
$ rman target /
RMAN> startup mount;
RMAN> restore tablespace SYSTEM;
RMAN> recover tablespace SYSTEM;
RMAN> alter database open;
3. Complete Open Database Recovery. Non system tablespace is missing, database is up
$ rman target /
RMAN> sql ‘alter tablespace <tbs> offline immediate’ ;
RMAN> restore tablespace <tbs> ;
RMAN> recover tablespace <tbs> ;
RMAN> sql ‘alter tablespace <tbs> online’ ;
To restore/recover only datafile(s)
58
$ rman target /
RMAN>. sql 'alter database datafile <file#> offline';
RMAN> restore datafile <file#>;
RMAN> recover datafile <file#>;
RMAN> sql 'alter database datafile <file#> online' ;
Note: datafile_name(within single quotes) can also be used instead of file#
4.Complete Open Database Recovery (when the database is initially closed). Non system
tablespace is missing
A user datafile is reported missing when trying to startup the database. The datafile can be
turned offline and the database started up. Restore and recovery are performed using Rman.
After recovery is performed the datafile can be turned online again.
sqlplus “/ as sysdba “
startup mount
alter database datafile <file#> offline;
alter database open;
exit;
59
$rman target /
RMAN> restore datafile <file#>;
RMAN> recover datafile <file#>;
RMAN> sql 'alter tablespace <tablespace_name> online';
Note: datafile_name(within single quotes) can also be used instead of file#
5.To restore a tablespace to a new location
$ rman target / catalog rman/rman@rcat
Take the tablespace offline.
Specify an accessible location to which you can restore the damaged datafile for the offline
tablespace.
Restore the datafile to the new location.
Switch the restored datafile so that the control file considers it the current datafile.
To restore the datafiles for tablespace USERS to a new location on disk:
run {
allocate channel ch1 type disk;
60
sql 'ALTER TABLESPACE USERS OFFLINE IMMEDIATE';
set newname for datafile '/disk1/oracle/users_1.dbf' to '/disk2/oracle/users_1.dbf';
restore tablespace users;
# make the control file recognize the restored file as current
switch datafile all;
}
RMAN> recover tablespace USERS;
RMAN> sql 'alter tablespace USERS online';
6. Recovery of a Datafile that has no backups (database is up)
If a non system datafile that was not backed up since the last backup, is missing, recovery can
be performed if all archived logs since the creation of the missing datafile exist. Since the
database is up you can check the tablespace name and put it offline. The option offline
immediate is used to avoid that the update of the datafile header.
Pre requisites: All relevant archived logs.
$ rman target /
RMAN> sql ‘alter database datafile <file#> offline’;
RMAN> restore datafile <file#> ;
-- no need to create a blank file, restore command takes care of that.
RMAN> recover datafile <file#>;
61
RMAN> sql 'alter database datafile <file#> online';
Note: datafile_name(within single quotes) can also be used instead of file#
7. Control File Recovery
Case-1 – Autobackup is available
Always multiplex your controlfiles. If you loose only one controlfile you can replace it with the
one you have in place, and startup the Database. If both controlfiles are missing, the database
will crash.
Pre requisites: A backup of your controlfile and all relevant archived logs. When using Rman
alway set configuration parameter autobackup of controlfile to ON.
rman target /
RMAN> startup nomount;
RMAN> restore controlfile from autobackup;
RMAN> alter database mount;
RMAN> recover database;
RMAN> alter database open resetlogs;
Make a new complete backup, as the database is open in a new incarnation and previous
archived log are not relevant.
62
Case-2 – Autobackup is not available but controlfile backupset is available
rman target /
RMAN> startup nomount;
RMAN> restore controlfile from <backupset_location>;
RMAN> alter database mount;
RMAN> restore database; --required if datafile(s) have been added after the backup
RMAN> recover database;
RMAN> alter database open resetlogs;
Make a new complete backup, as the database is open in a new incarnation and previous
archived log are not relevant.
Case-3 – If no backup is available, create the controlfile manually using script and then
recover as given above.
Note: RMAN automatically searches in specific locations for online and archived redo logs
during recovery that are not recorded in the RMAN repository, and catalogs any that it finds.
RMAN attempts to find a valid archived log in any of the current archiving destinations with the
current log format. The current format is specified in the initialization parameter file used to
start the instance (or all instances in a Real Application Clusters installation). Similarly, RMAN
attempts to find the online redo logs by using the filenames as specified in the control file.
63
8. Incomplete Recovery, Until time/sequence/scn
Incomplete recovery may be necessary when the database crashes and needs to be
recovered, and in the recovery process you find that an archived log is missing. In this case
recovery can only be made until the sequence before the one that is missing. Another scenario
for incomplete recovery occurs when an important object was dropped or incorrect data was
committed on it. In this case recovery needs to be performed until before the object was
dropped.
Pre requisites: A full closed or open database backup and archived logs, the time or sequence
that the 'until' recovery needs to be performed.
If the database is open, shutdown it to perform full restore.
shutdown abort
startup nomount
=============================
$ rman target / rcvcat rman/rman@rcat
RMAN> run {
set until time "to_date('2012/01/23 16:00:00',
'YYYY/MM/DD HH14:MI:SS')";
allocate channel d1 type disk;
restore controlfile to '/tmp/cf';
replicate controlfile from '/tmp/cf';
sql 'alter database mount';
restore database;
recover database;
}
64
Make a new complete backup, as the database is opened in new incarnation and previous
archive logs are not relevant.
9. Recovering After the Loss of All Members of an Online Redo Log Group
If a media failure damages all members of an online redo log group, then different scenarios
can occur depending on the type of online redo log group affected by the failure and the
archiving mode of the database.
If the damaged log group is inactive, then it is not needed for crash recovery; if it is active,
then it is needed for crash recovery.
SQL> startup mount
Case-1 If the group is INACTIVE
Then it is not needed for crash recovery
Clear the archived or unarchived group. (For archive status, check in v$log)
1.1 Clearing Inactive, Archived Redo
alter database clear logfile group 1 ;
alter database open ;
65
1.2 Clearing Inactive, Not-Yet-Archived Redo
alter database clear unarchived logfile group 1 ;
OR
(If there is an offline datafile that requires the cleared log to bring it online, then the
keywords UNRECOVERABLE DATAFILE are required. The datafile and its entire tablespace have
to be dropped because the redo necessary to bring it online is being cleared, and there is no
copy of it. )
alter database clear unarchived logfile group 1 unrecoverable datafile;
Take a complete backup of database.
And now open database:
alter database open ;
Case-2 If the group is ACTIVE
Restore backup and perform an incomplete recovery.
And open database using resetlogs
alter database open resetlogs;
Case-3 If the group is CURRENT
Restore backup and perform an incomplete recovery.
66
And open database using resetlogs
alter database open resetlogs;
10. Restoring database to new host from RMAN backup
1) Restore the database backup and the archived log backup (if it was a hot backup) to the
target server.
2) Copy the ORACLE_HOME from source to target if it is not already there.
3) If you don't have a controlfile backup taken after the cold backup, take a control file
backup on the source:
RMAN> backup current controlfile to '<path/filename.ctl>';
or
SQL> alter database backup controlfile to '<path/filename.ctl>';
4) Copy this controlfile backup to the target node.
5) On the target:
Create a pfile, or copy it from the source, and change the following parameters:
IFILE
*_DUMP_DEST
LOG_ARCHIVE_DEST
CONTROL_FILES
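As a sketch, the edited pfile on the new host might contain entries like the following (all paths below are invented examples, not values from this document):

```
control_files='/u02/oradata/NEWHOST/control01.ctl'
log_archive_dest_1='LOCATION=/u02/arch'
background_dump_dest='/u02/admin/NEWHOST/bdump'
user_dump_dest='/u02/admin/NEWHOST/udump'
# remove or correct any IFILE entry pointing to a file that does not exist on this host
```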
$ export NLS_DATE_FORMAT="yyyy-mm-dd hh24:mi"
$ rman target /
RMAN> startup nomount ;
RMAN> sql 'create spfile from pfile' ; -- optional; the spfile is used from the next restart
RMAN> restore controlfile from '<path/filename.ctl>' ;
RMAN> alter database mount ;
RMAN> list backup ; -- note the SCN number or time of the backup you want to restore
RMAN> restore database until time '<date/time>' ;
OR
RMAN> restore database until scn <scn_number> ;
OR
RMAN> restore database from tag '<tag_name>' ;
And then (repeat the same until clause on the recover command):
RMAN> recover database until time '<date/time>' ;
RMAN> alter database open resetlogs ;
Note: The same method can also be used when you want to restore the database from an older
backup instead of the latest one.
11. Restoring backups from tape
Use the following steps to restore backups from tape.
$ export NLS_DATE_FORMAT="yyyy-mm-dd hh24:mi"
RMAN> list backup ; -- Note the SCN or time of the backup you want to restore.
RMAN> run{
allocate channel t1 type 'SBT_TAPE' parms="ENV=(NB_ORA_CLIENT=ibm5003bk)";
restore database until scn <scn_number>; --scn number as in list backup output
recover database until scn <scn_number>;
}
Notes:
1) The until scn clause can be used with the recover command as well for incomplete recovery.
Another option is to use set until inside the run block, just before the restore.
2) from tag '<tag_name>' can also be used (instead of an until clause) with the restore command.
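The set until alternative from note 1 can be sketched like this (same tape channel as above; the date is an example and is interpreted via the exported NLS_DATE_FORMAT):

```sql
RMAN> run {
  allocate channel t1 type 'SBT_TAPE' parms="ENV=(NB_ORA_CLIENT=ibm5003bk)";
  set until time '2012-01-23 16:00';   -- applies to both restore and recover below
  restore database;
  recover database;
}
```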
******************************************************************************
******************************************************************************
**********
RMAN COMMANDS
******************************************************************************
******************************************************************************
**********
LIST Commands :
LIST Commands are generic RMAN commands that will show/list various types of information
when executed within RMAN utility.
RMAN> list backup;
It lists all the backups taken using RMAN utility.
RMAN> list backup of database;
The above command lists out all the (individual) files that are in the backup.
RMAN> list backup of tablespace system;
The above command lists out all the files that are in the backup that belong to the tablespace
‘system’.
RMAN> list backup of controlfile;
The above command lists all backups of the control file.
CONFIGURE Commands :
CONFIGURE commands in RMAN are used to configure/setup various initial settings:
RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
The above command tells RMAN to retain two backup copies of each datafile and control file.
(REDUNDANCY counts copies, not days; the default policy is REDUNDANCY 1.)
RMAN> CONFIGURE RETENTION POLICY CLEAR;
The above command resets the retention policy to its default (REDUNDANCY 1).
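If retention in days is what is actually wanted, RMAN also has a time-based policy; for example:

```sql
-- keep whatever is needed to recover to any point within the last 7 days
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
```

Only one of REDUNDANCY or RECOVERY WINDOW can be in effect at a time.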
CONFIGURE BACKUP OPTIMIZATION ON;
The above command makes RMAN skip files that are already identical to a backed-up copy on
the specified device.
CONFIGURE BACKUP OPTIMIZATION CLEAR;
The above command resets the Optimization option to the default value.
SHOW Commands :
Show commands are used to show/display the configuration setting values.
RMAN> show all;
The above command shows all the current configuration settings on the screen.
RMAN> show retention policy;
The above command shows the current retention policy.
RMAN> show default device type;
The above command shows the default device type configured for backups.
RMAN> show datafile backup copies;
The above command shows the configured number of copies to produce for each datafile backup.
BACKUP Commands :
Backup commands are the real commands which do the actual backup work.
RMAN> backup database;
The above command backs up the database (target).
RMAN> backup database include current controlfile plus archivelog;
The above command backs up the target database together with the current control file and the
archived log files.
REPORT Commands :
Report commands list/report specific information. The difference from list is that report
analyzes the repository (for example, against the retention policy) rather than simply listing it.
RMAN> report obsolete;
The above command reports which backups can be deleted.
DELETE Commands :
Delete commands delete/remove specific items from the server, repository catalog.
RMAN> delete obsolete;
The above command deletes all backups that are obsolete according to the configured retention
policy.
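Two related housekeeping commands, shown here as a sketch, are commonly run first so the repository matches what is actually on disk or tape:

```sql
RMAN> crosscheck backup;           -- marks missing backup pieces as EXPIRED
RMAN> delete expired backup;       -- removes repository records of those missing pieces
RMAN> delete noprompt obsolete;    -- delete obsolete, without the confirmation prompt
```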
RMAN> List incarnation;
List Summary of Backups
The backup summary includes the backup set key, status, device type, completion time, etc.:
RMAN> List Backup Summary;
RMAN> List expired Backup of archivelog all summary;
RMAN> List Backup of tablespace Test summary;
List Backups of various files
It provides the summary of the backups available for each datafile, controlfile, archivelog file
and spfile.
RMAN> List Backup By File;
Detailed Report
If you want the detailed report on the backups, then issue the following command.
RMAN> List Backup;
It lists all available information about the backups.
Backups used for Recovery
To list the backups used for restore and recovery,
RMAN> list recoverable backup;
Expired Backups
The list backup shows both available and expired backups. To view only the expired backups,
RMAN> List expired Backup;
RMAN> List expired Backup summary;
RMAN> List expired Backup of Archivelog all;
RMAN> List expired Backup of datafile 10;
Listing Tablespace and Datafile Backups
RMAN> List Backup of Tablespace Test;
RMAN> List Backup of Datafile 4;
Listing Archivelog Backups
RMAN> List Archivelog all;
RMAN> List Archivelog all backed up 2 times to device type sbt;
Listing Controlfile and Spfile Backups
RMAN> List Backup of Controlfile;
RMAN> List Backup of Spfile;
The above list commands display information about backup sets. If you have performed image
copy backups, use the list copy command instead, as shown below:
RMAN> List Copy;
RMAN> List Copy of database;
RMAN> List Copy of tablespace test;
RMAN> List Copy of archivelog all;
RMAN> List Copy of archivelog from sequence 12345;
RMAN> List Copy of archivelog from sequence 1000 until sequence 1010;
RMAN> List Copy of Controlfile;
RMAN> List Copy of Spfile;
RMAN: Archivelogs lost
Problem: I have lost some archivelog files without backing them up. If I run RMAN to back up
the available archivelogs, it throws an error that an archivelog sequence is not available.
Solution: update the repository so RMAN knows which logs really exist on disk:
RMAN> crosscheck archivelog all;
(On older releases, change archivelog all validate; served the same purpose.)
Now run the backup archivelog command; RMAN will back up the available archivelogs
successfully.
Thanks
Posted by Vinod Dhandapani at Thursday, February 10, 2011
Labels: Backup, RMAN
Monday, March 1, 2010
Block Change Tracking
RMAN incremental backups back up only the blocks that changed since the latest base
incremental backup. But RMAN still had to scan the whole database to find the changed blocks:
the incremental backup read the entire database and wrote only the changed blocks. Thus
incremental backups save space, but the reduction in backup time is fairly negligible.
Block Change Tracking (BCT) is a new feature in Oracle 10g. BCT enables RMAN to read only the
blocks that changed since the latest base incremental backup. Hence, with BCT enabled, RMAN
both reads and writes only the changed blocks.
Without BCT, RMAN has to read every block in the database and compare the SCN in the block
with the SCN of the base backup. If the block's SCN is greater than the base backup's SCN,
the block is a candidate for the new incremental backup. Usually only a few blocks change
between backups, so RMAN does the unnecessary work of reading the whole database.
BCT stores information about changed blocks in the block change tracking file. The
background process that does this logging is the Change Tracking Writer (CTWR).
Block Change Tracking File
There is one BCT file per database; in a RAC configuration it is shared among all instances. The
BCT file is created as an OMF file in the location defined by the DB_CREATE_FILE_DEST parameter.
To enable BCT
SQL> Alter Database Enable Block Change Tracking;
To disable BCT
SQL> Alter Database Disable Block Change Tracking;
To specify the BCT file location
SQL> Alter Database enable Block Change Tracking using File '/Backup/BCT/bct.ora';
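Whether tracking is enabled, and where the tracking file lives, can be checked in the v$block_change_tracking view:

```sql
-- STATUS is ENABLED or DISABLED; FILENAME and BYTES describe the tracking file
SELECT status, filename, bytes
FROM   v$block_change_tracking;
```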
A useful query,
SQL> Select Completion_time, datafile_blocks, blocks_read, blocks, used_change_tracking
From v$backup_datafile
where to_char(completion_time, 'dd/mon/yy') = to_char(sysdate, 'dd/mon/yy');
Where:
datafile_blocks is the total number of blocks in the datafile,
blocks_read is the total number of blocks read by RMAN,
blocks is the total number of blocks backed up by RMAN, and
used_change_tracking is YES if BCT was used for that backup and NO otherwise.
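Building on the v$backup_datafile columns described above, the fraction of each datafile actually read gives a quick measure of how much BCT is helping (a sketch; INCREMENTAL_LEVEL restricts the output to incremental backups):

```sql
-- low pct_read with used_change_tracking = 'YES' means BCT is doing its job
SELECT file#, completion_time,
       ROUND(blocks_read / datafile_blocks * 100, 1) AS pct_read,
       used_change_tracking
FROM   v$backup_datafile
WHERE  incremental_level > 0;
```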
Thanks
Posted by Vinod Dhandapani at Monday, March 01, 2010
Labels: 10g Features, Backup, RMAN
Wednesday, January 28, 2009
Restore Vs Recovery
Restore
Restore means using the backup files to replace the original files after a media failure.
Recovery
Recovery means bringing the database up to date using the restored files, archive logs and
online redo logs.
Thanks
Posted by Vinod Dhandapani at Wednesday, January 28, 2009
Labels: Backup, Recovery
Tuesday, November 18, 2008
FRA Parameters
The following parameters are involved in setting up a FRA.
1. DB_CREATE_FILE_DEST - location for all OMF data files.
2. DB_CREATE_ONLINE_LOG_DEST_n - location for control files and online redo log files. If this
parameter is not set then Oracle creates all three types of files in the DB_CREATE_FILE_DEST
location.
3. DB_RECOVERY_FILE_DEST_SIZE - Size of FRA.
4. DB_RECOVERY_FILE_DEST - Location of FRA.
5. LOG_ARCHIVE_DEST_n - Location of archive log files.
For example,
DB_CREATE_FILE_DEST = /oradata/dbfiles/
LOG_ARCHIVE_DEST_1 = 'LOCATION=/export/archive/arc_dest1'
LOG_ARCHIVE_DEST_2 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST'
DB_RECOVERY_FILE_DEST_SIZE =350GB
DB_RECOVERY_FILE_DEST = '/fra/rec_area'
One copy of the current control file and the online redo log files is also stored in the FRA.
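To monitor how full the FRA is, the following queries can be used (a sketch; views and columns are as documented for 10g):
SQL> Select name, space_limit/1024/1024 limit_mb, space_used/1024/1024 used_mb, number_of_files
From v$recovery_file_dest;
SQL> Select file_type, percent_space_used, percent_space_reclaimable
From v$flash_recovery_area_usage;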
Control Files, redolog files and FRA
DB_CREATE_ONLINE_LOG_DEST_n : Setting this init parameter enables Oracle to create OMF control
files and online redolog files in the location specified by the parameter.
DB_RECOVERY_FILE_DEST : Setting this init parameter enables Oracle to create OMF control
files and online redolog files in the FRA.
Specifying both of the above parameters enables Oracle to create OMF-based control and redolog
files in both locations.
Omitting both parameters causes Oracle to create non-OMF-based control and
redolog files in the system-specific default location.
Thanks
Posted by Vinod Dhandapani at Tuesday, November 18, 2008
Labels: Backup, FRA, Parameters
Setup Flash Recovery Area
To set up the FRA, you need to set two init parameters in the following order.
SQL> Alter system set DB_RECOVERY_FILE_DEST_SIZE=375809638400 scope=spfile;
SQL> Alter system set DB_RECOVERY_FILE_DEST=+ASM_FLASH scope=spfile;
When you configure the FRA, the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST init
parameters become obsolete; you must use the LOG_ARCHIVE_DEST_n parameters instead.
LOG_ARCHIVE_DEST_10 is by default set to USE_DB_RECOVERY_FILE_DEST, so archived log files
are stored in the FRA by default. If you have configured other locations, a copy of the archived
files will be placed in those locations as well.
USE_DB_RECOVERY_FILE_DEST refers to the FRA.
You can also use OEM and DBCA to configure FRA.
Disable FRA
Set DB_RECOVERY_FILE_DEST=''
Posted by Vinod Dhandapani at Tuesday, November 18, 2008
Labels: Backup, FRA
Thursday, October 9, 2008
Convert ArchiveLog into NoArchivelog mode in RAC
The post has been moved to a different location.
Click Here to access the post.
Posted by Vinod Dhandapani at Thursday, October 09, 2008
Labels: Backup, RAC
Archive to No Archive Mode
Steps to convert the database from Archivelog Mode to No Archive log mode.
SQL> Alter system set log_archive_start=false scope=spfile;
Database Altered
SQL> Shutdown Immediate
Database Closed
Database Dismounted
Instance Shutdown
SQL> Startup Mount;
Oracle Instance started
.....
Database Mounted
SQL> Alter Database NoArchivelog;
Database Altered
SQL> Alter database Open;
Database Altered.
SQL> Archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 3108
Next log sequence to archive 3109
Current log sequence 3109
Your database is in NoArchivelog Mode.
Posted by Vinod Dhandapani at Thursday, October 09, 2008
Labels: Backup
Thursday, September 18, 2008
Configure Archive Log in RAC
The post has been moved to another location. Click Here to access the post.
Posted by Vinod Dhandapani at Thursday, September 18, 2008
Labels: Backup, RAC
Tuesday, July 22, 2008
Configure Archive Log Mode
Oracle databases are created in NOARCHIVELOG mode. This means that you can only take
consistent cold backups.
In order to perform online backups your database must be running in ARCHIVELOG mode.
Archive log files can also be transmitted and applied to a standby database in a DR (Disaster
Recovery) site.
To find out which mode the database is running in, log in as the sys user and issue the following statement.
SQL> archive log list
Database log mode : Archive Mode
Automatic archival : Enabled
Archive destination : /cgildb/oradata/archive
Oldest online log sequence : 3413
Next log sequence to archive : 3415
Current log sequence : 3415
You can also issue the following statement.
SQL> Select log_mode from v$database;
LOG_MODE
--------------
Archivelog
Convert NoArchive to ArchiveLog mode
Step 1: Set the following Initialization Parameter
1. Log_archive_dest_1 = 'LOCATION=/export/prodarch' MANDATORY REOPEN
2. Log_archive_dest_2 = 'SERVICE=stbyprod' OPTIONAL REOPEN=60
3. Log_archive_format = 'prod_%t_%r_%s.arc'
4. Log_archive_max_processes= 2
5. Log_archive_min_succeed_dest = 1
6. Log_archive_start= True (deprecated from oracle 10g).
Step 2:
SQL> Shutdown Immediate
Database Closed
Database Dismounted
Instance Shutdown
SQL> Startup Mount
Oracle Instance started
.....
Database Mounted
SQL> Alter Database Archivelog;
Database Altered
SQL> Alter Database Open;
Database Altered.
That's it. Your database is in Archivelog mode now.
About Related Initialization Parameter
1. Log_archive_dest_n : n = 1 to 10. The destination can be on a local machine or a remote
machine (standby). The syntax is,
log_archive_dest_n = 'SERVICE=service_name (from tnsnames) | LOCATION=directory
[MANDATORY | OPTIONAL] [REOPEN[=integer]]'
where,
Mandatory - archiving must succeed at this destination.
Reopen - (time in secs) retries to archive when it did not succeed the first time. (default 300)
2. Log_archive_format: Format of the archived log filenames. Use any text in combination with
any of the substitution variables to uniquely name the file. The substitution variables are,
%s - Log Sequence Number
%t - Thread Number
%r - Resetlogs ID (Ensure uniqueness when you reset the log sequence number)
%d - Database ID.
3. Log_archive_dest_state_n: (Enable | Defer | Alternate | Reset)
Enable - available (default)
Defer - temporarily unavailable
Alternate - Initially deferred. Available when there is a failure in the primary destination.
4. Log_archive_max_processes : Maximum number of active archive processes. default 2. Range
1 - 10.
5. Log_archive_min_succeed_dest: Minimum number of destinations that ARCn must write to
before overwriting the filled online redo log file. Default - 1.
6. Log_archive_duplex_dest: When you use this parameter you can archive to a maximum of
two destinations. The first destination is given by the log_archive_dest parameter and the second
destination by this parameter.
7. Log_archive_start: Set this parameter to true so that the archiver starts when you restart
the instance. This parameter is not dynamic, so you must restart the instance for it to take effect.
SQL> alter system set log_archive_start=true scope=spfile;
SQL> Shutdown Immediate;
To enable automatic archiving with out shutting down the instance,
SQL> Alter system archive log start;
--Change log_archive_start to true for the next startup.
In 10g this parameter has been deprecated; by enabling archive log mode, the archiver
starts automatically.
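As an illustration of the format variables described above (the resulting filename is hypothetical):
SQL> Alter system set log_archive_format = 'prod_%t_%r_%s.arc' scope=spfile;
With thread 1, resetlogs ID 658465 and log sequence 3109, the archived log would be named
prod_1_658465_3109.arc.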
"@"
Run a command file.
"@@"
Run a command file in the same directory as another command file that is currently running.
The @@ command differs from the @ command only when run from within a command file.
"ALLOCATE CHANNEL"
Establish a channel, which is a connection between RMAN and a database instance.
"ALLOCATE CHANNEL FOR MAINTENANCE"
Allocate a channel in preparation for issuing maintenance commands such as DELETE.
"allocOperandList"
A subclause that specifies channel control options such as PARMS and FORMAT.
"ALTER DATABASE"
Mount or open a database.
"archivelogRecordSpecifier"
Specify a range of archived redo logs files.
"BACKUP"
Back up database files, copies of database files, archived logs, or backup sets.
"BLOCKRECOVER"
Recover an individual data block or set of data blocks within one or more datafiles.
"CATALOG"
Add information about a datafile copy, archived redo log, or control file copy to the repository.
"CHANGE"
Mark a backup piece, image copy, or archived redo log as having the status UNAVAILABLE or
AVAILABLE; remove the repository record for a backup or copy; override the retention policy for
a backup or copy.
"completedTimeSpec"
Specify a time range during which the backup or copy completed.
"CONFIGURE"
Configure persistent RMAN settings. These settings apply to all RMAN sessions until explicitly
changed or disabled.
"CONNECT"
Establish a connection between RMAN and a target, auxiliary, or recovery catalog database.
"connectStringSpec"
Specify the username, password, and net service name for connecting to a target, recovery
catalog, or auxiliary database. The connection is necessary to authenticate the user and identify
the database.
"CONVERT"
Converts datafile formats for transporting tablespaces across platforms.
"CREATE CATALOG"
Create the schema for the recovery catalog.
"CREATE SCRIPT"
Create a stored script and store it in the recovery catalog.
"CROSSCHECK"
Determine whether files managed by RMAN, such as archived logs, datafile copies, and backup
pieces, still exist on disk or tape.
"datafileSpec"
Specify a datafile by filename or absolute file number.
"DELETE"
Delete backups and copies, remove references to them from the recovery catalog, and update
their control file records to status DELETED.
"DELETE SCRIPT"
Delete a stored script from the recovery catalog.
"deviceSpecifier"
Specify the type of storage device for a backup or copy.
"DROP CATALOG"
Remove the schema from the recovery catalog.
"DROP DATABASE"
Deletes the target database from disk and unregisters it.
"DUPLICATE"
Use backups of the target database to create a duplicate database that you can use for testing
purposes or to create a standby database.
"EXECUTE SCRIPT"
Run an RMAN stored script.
"EXIT"
Quit the RMAN executable.
"fileNameConversionSpec"
Specify patterns to transform source to target filenames during BACKUP AS COPY, CONVERT and
DUPLICATE.
"FLASHBACK"
Returns the database to its state at a previous time or SCN.
"formatSpec"
Specify a filename format for a backup or copy.
"HOST"
Invoke an operating system command-line subshell from within RMAN or run a specific
operating system command.
"keepOption"
Specify that a backup or copy should or should not be exempt from the current retention policy.
"LIST"
Produce a detailed listing of backup sets or copies.
"listObjList"
A subclause used to specify which items will be displayed by the LIST command.
"maintQualifier"
A subclause used to specify additional options for maintenance commands such as DELETE and
CHANGE.
"maintSpec"
A subclause used to specify the files operated on by maintenance commands such as CHANGE,
CROSSCHECK, and DELETE.
"obsOperandList"
A subclause used to determine which backups and copies are obsolete.
"PRINT SCRIPT"
Display a stored script.
"QUIT"
Exit the RMAN executable.
"recordSpec"
A subclause used to specify which objects the maintenance commands should operate on.
"RECOVER"
Apply redo logs and incremental backups to datafiles restored from backup or datafile copies, in
order to update them to a specified time.
"REGISTER"
Register the target database in the recovery catalog.
"RELEASE CHANNEL"
Release a channel that was allocated with an ALLOCATE CHANNEL command.
"releaseForMaint"
Release a channel allocated with an ALLOCATE CHANNEL FOR MAINTENANCE command.
"REPLACE SCRIPT"
Replace an existing script stored in the recovery catalog. If the script does not exist, then
REPLACE SCRIPT creates it.
"REPORT"
Perform detailed analyses of the content of the recovery catalog.
"RESET DATABASE"
Inform RMAN that the SQL statement ALTER DATABASE OPEN RESETLOGS has been executed
and that a new incarnation of the target database has been created, or reset the target
database to a prior incarnation.
"RESTORE"
Restore files from backup sets or from disk copies to the default or a new location.
"RESYNC"
Perform a full resynchronization, which creates a snapshot control file and then copies any new
or changed information from that snapshot control file to the recovery catalog.
"RUN"
Execute a sequence of one or more RMAN commands, which are one or more statements
executed within the braces of RUN.
"SEND"
Send a vendor-specific quoted string to one or more specific channels.
"SET"
Sets the value of various attributes that affect RMAN behavior for the duration of a RUN block
or a session.
"SHOW"
Displays the current CONFIGURE settings.
"SHUTDOWN"
Shut down the target database. This command is equivalent to the SQL*Plus SHUTDOWN
command.
"SPOOL"
Write RMAN output to a log file.
"SQL"
Execute a SQL statement from within Recovery Manager.
"STARTUP"
Start up the target database. This command is equivalent to the SQL*Plus STARTUP command.
"SWITCH"
Specify that a datafile copy is now the current datafile, that is, the datafile pointed to by the
control file. This command is equivalent to the SQL statement ALTER DATABASE RENAME FILE
as it applies to datafiles.
"UNREGISTER DATABASE"
Unregisters a database from the recovery catalog.
"untilClause"
A subclause specifying an upper limit by time, SCN, or log sequence number. This clause is
usually used to specify the desired point in time for an incomplete recovery.
"UPGRADE CATALOG"
Upgrade the recovery catalog schema from an older version to the version required by the
RMAN executable.
"VALIDATE"
Examine a backup set and report whether its data is intact. RMAN scans all of the backup pieces
in the specified backup sets and looks at the checksums to verify that the contents can be
successfully restored.
******************************************************************************
RMAN INTERVIEW QUESTIONS
******************************************************************************
1. What is RMAN ?
Recovery Manager (RMAN) is a utility that can manage your entire Oracle backup and recovery
activities.
1.1 Which files can be backed up using RMAN?
Database Files (with RMAN)
Control Files (with RMAN)
Offline Redolog Files (with RMAN)
INIT.ORA (manually)
Password Files (manually)
2. When you take a hot backup putting Tablespace in begin backup mode, Oracle records SCN
# from header of a database file. What happens when you issue hot backup database in RMAN
at block level backup? How does RMAN mark the record that the block has been backed up ?
How does RMAN know what blocks were backed up so that it doesn't have to scan them again?
Oracle 10g introduced the Block Change Tracking feature. Once enabled, this feature
records the blocks modified since the last backup and stores the log in a block change tracking file.
During backups RMAN uses the log file to identify the specific blocks that must be backed up.
This improves RMAN's performance as it does not have to scan whole datafiles to detect
changed blocks.
Logging of changed blocks is performed by the CTWR process, which is also responsible for
writing data to the block change tracking file. RMAN uses SCNs at the block level and the
archived redo logs to resolve any inconsistencies in the datafiles from a hot backup. What
RMAN does not require is putting the tablespace in BACKUP mode, thus freezing the SCN in the
header. Rather, RMAN keeps this information in either your control file or in the RMAN
repository (i.e., the recovery catalog).
3. What are the Architectural components of RMAN?
1.RMAN executable
2.Server processes
3.Channels
4.Target database
5.Recovery catalog database (optional)
6.Media management layer (optional)
7.Backups, backup sets, and backup pieces
4. What are Channels?
A channel is an RMAN server process started when there is a need to communicate with an I/O
device, such as a disk or a tape. A channel is what reads and writes RMAN backup files. It is
through the allocation of channels that you govern I/O characteristics such as:
* Type of I/O device being read or written to, either a disk or an sbt_tape
* Number of processes simultaneously accessing an I/O device
* Maximum size of files created on I/O devices
* Maximum rate at which database files are read
* Maximum number of files open at a time
5. Why is the catalog optional?
Because RMAN manages backup and recovery operations, it requires a place to store necessary
information about the database. RMAN always stores this information in the target database
control file. You can also store RMAN metadata in a recovery catalog schema contained in a
separate database. The recovery catalog
schema must be stored in a database other than the target database.
6. What does complete RMAN backup consist of ?
A backup of all or part of your database. This results from issuing an RMAN backup command. A
backup consists of one or more backup sets.
7. What is a Backup set?
A logical grouping of backup files -- the backup pieces -- that are created when you issue an
RMAN backup command. A backup set is RMAN's name for a collection of files associated with a
backup. A backup set is composed of one or more backup pieces.
8. What is a Backup piece?
A physical binary file created by RMAN during a backup. Backup pieces are written to your
backup medium, whether to disk or tape. They contain blocks from the target database's
datafiles, archived redo log files, and control files. When RMAN constructs a backup piece from
datafiles, there are several rules that it follows:
# A datafile cannot span backup sets.
# A datafile can span backup pieces as long as it stays within one backup set.
# Datafiles and control files can coexist in the same backup sets.
# Archived redo log files are never in the same backup set as datafiles or control files.
RMAN is the only tool that can operate on backup pieces. If you need to restore a file from an
RMAN backup, you must use RMAN to do it. There is no way to manually reconstruct database
files from the backup pieces.
9. What are the benefits of using RMAN?
1. Incremental backups that only copy data blocks that have changed since the last backup.
2. Tablespaces are not put in backup mode, thus there is no extra redo log generation during
online backups.
3. Detection of corrupt blocks during backups.
4. Parallelization of I/O operations.
5. Automatic logging of all backup and recovery operations.
6. Built-in reporting and listing commands.
10. RMAN Restore Preview
The PREVIEW option of the RESTORE command allows you to identify the backups required to
complete a specific restore operation. The output generated by the command is in the same
format as the LIST command. In addition the PREVIEW SUMMARY command can be used to
produce a summary report with the same format as the LIST SUMMARY command. The
following examples show how these commands are used:
# Spool output to a log file
SPOOL LOG TO c:\oracle\rmancmd\restorepreview.lst;
# Show what files will be used to restore the SYSTEM tablespace’s datafile
RESTORE DATAFILE 2 PREVIEW;
# Show what files will be used to restore a specific tablespace
RESTORE TABLESPACE users PREVIEW;
# Show a summary for a full database restore
RESTORE DATABASE PREVIEW SUMMARY;
# Close the log file
SPOOL LOG OFF;
11. Where should the catalog be created?
The recovery catalog used by RMAN should be created in a separate database other than the
target database, the reason being that the target database will be shut down while datafiles
are restored.
12. How many times does oracle ask before dropping a catalog?
The default is two times: once for the actual command and once for confirmation.
13. How to view the current defaults for the database.
RMAN> show all;
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO
'/u02/app/oracle/product/10.1.0/db_1/dbs/snapcf_test.f'; # default
14. Backup the database.
run
{
backup incremental level $level ${level_keyword}
tag INC${target_db}_$level database include current controlfile;
backup archivelog all not backed up 1 times delete input;
}
15. How to resolve the ora-19804 error
Basically this error occurs because the flash recovery area is full. One way to solve it is to
increase the space available for the flash recovery area.
sql> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=5G; -- It can be set to K, M or G.
rman>backup database;
……………….
channel ORA_DISK_1: specifying datafile(s) in backupset
including current controlfile in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 04-JUL-05
channel ORA_DISK_1: finished piece 1 at 04-JUL-05
piece handle=/u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_04/o1
_mf_ncsnf_TAG20050704T205840_1dmy15cr_.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 04-JUL-05
After taking a backup, resync the catalog.
Restoring the whole database.
run {
shutdown immediate;
startup mount;
restore database;
recover database;
alter database open;
}
16. What are the various reports available with RMAN
rman> list backup;
rman> list archivelog all;
17. What does backup incremental level=0 database do?
Backup incremental level=0 is a full backup of the database.
rman> backup incremental level=0 database;
You can also use backup full database; which means the same thing as level=0.
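A sketch of a level 0 base backup followed by a level 1 incremental that depends on it:
RMAN> backup incremental level 0 database;
-- later, backs up only blocks changed since the level 0 backup
RMAN> backup incremental level 1 database;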
18. What is the difference between DELETE INPUT and DELETE ALL command in backup?
Generally speaking, LOG_ARCHIVE_DEST_n points to two disk locations where we archive the
files. When a command is issued through RMAN to back up archivelogs, it uses one of the
locations to back up the data. When we specify DELETE INPUT, only the location that was backed
up gets deleted; if we specify DELETE ALL, the logs in all log_archive_dest_n locations get deleted.
DELETE ALL applies only to archived logs, e.g. delete expired archivelog all;
19. How do I backup archive log?
In order to back up archivelogs, run a block such as:
run
{
allocate channel t1 type 'SBT_TAPE';
backup archivelog all;
delete noprompt archivelog until time = 'sysdate-3/24';
delete noprompt obsolete;
release channel t1;
}
20. How do I do a incremental backup after a base backup?
run
{
backup incremental level $level ${level_keyword}
tag INC${target_db}_$level database include current controlfile;
backup archivelog all not backed up 1 times delete input;
}
21. In catalog database, if some of the blocks are corrupted due to system crash, How will you
recover?
using RMAN BLOCK RECOVER command
22. You have taken a manual backup of a datafile using o/s. How RMAN will know about it?
You have to catalog that manual backup in RMAN's repository by command
RMAN> catalog datafilecopy '/DB01/BACKUP/users01.dbf';
restrictions:
> Accessible on disk
> A complete image copy of a single file
23. Where RMAN keeps information of backups if you are using RMAN without Catalog?
RMAN keeps information of backups in the control file.
CATALOG vs NOCATALOG
The difference is only in who maintains the backup records, such as when the last successful
backup was taken, incremental, differential, etc.
In CATALOG mode another database (the catalog database) stores all the information.
In NOCATALOG mode the control file of the target database is responsible.
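In NOCATALOG mode the retention of backup records in the control file is governed by the
CONTROL_FILE_RECORD_KEEP_TIME parameter (default 7 days). A sketch of checking and raising it:
SQL> Show parameter control_file_record_keep_time
SQL> Alter system set control_file_record_keep_time = 14;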
24. How do you see information about backups in RMAN?
RMAN> List Backup;
Use this SQL to check
SQL> SELECT sid, totalwork, sofar FROM v$session_longops WHERE sid = 153;
Substitute the SID of the session running the backup (153 here is just an example).
25. How RMAN improves backup time?
RMAN backup time is much less than that of a regular online backup, as RMAN copies only
modified blocks.
26. What is the advantage of RMAN utility?
Central Repository
Incremental Backup
Corruption Detection
Advantage over traditional backup systems:
1). Copies only the used blocks, i.e. even if 1000 blocks are allocated to a datafile but only 500
are filled with data, RMAN will create a backup of only those 500 filled blocks.
2). Incremental and cumulative backups.
3). Catalog and nocatalog options.
4). Detection of corrupted blocks during backup.
5). Can create and store backup and recovery scripts.
6). Increased performance through automatic parallelization (allocating channels) and less redo
generation.
27. List the encryption options available with RMAN?
RMAN offers three encryption modes: transparent mode, password mode and dual mode.
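A sketch of how each mode is enabled (mypassword is a placeholder):
RMAN> configure encryption for database on;            -- transparent mode, uses the Oracle wallet
RMAN> set encryption on identified by mypassword only; -- password mode
RMAN> set encryption on identified by mypassword;      -- dual mode (wallet or password can decrypt)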
28. What are the steps required to perform in $ORACLE_HOME for enabling the RMAN backups
with netbackup or TSM tape library software?
I can explain the steps to take an RMAN backup with a TSM tape library as follows:
1. Install TDPO (default path /usr/tivoli/tsm/client/oracle/).
2. Once you install TDPO, a link is automatically created from the TDPO directory to /usr/lib.
Now we need to create a soft link between the OS and ORACLE_HOME:
ln -s /usr/lib/libiobk64.a $ORACLE_HOME/lib/libobk.a (very important)
3. Uncomment and modify the tdpo.opt file in /usr/tivoli/tsm/client/oracle/bin/tdpo.opt as follows:
DSMI_ORC_CONFIG /usr/Tivoli/tsm/client/oracle/bin64/dsm.opt
DSMI_LOG /home/tmp/oracle
TDPO_NODE backup
TDPO_PSWDPATH /usr/tivoli/tsm/client/oracle/bin64
4. Create a dsm.sys file in the same path and add the entries:
SErvername <Server name>
TCPPort 1500
passwordaccess prompt
nodename backup
enablelanfree yes
TCPSERVERADDRESS <Server Address>
5. Create a dsm.opt file and add the entry:
SErvername <Server name>
6. Then take the backup:
RMAN>run
{
allocate channel t1 type 'sbt_tape' parms
'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
backup database include current controlfile;
release channel t1;
}
29. What is the significance of incarnation and DBID in the RMAN backups?
When you have multiple databases you have to set your DBID (Database Id) which is unique to
each database. You have to set this before you do any restore operation from RMAN.
There is a possibility that the incarnation of your database may be different, so it is advised to
reset it to match the current incarnation. If you run the RMAN command ALTER DATABASE OPEN
RESETLOGS then RMAN resets the target database automatically so that you do not have to run
RESET DATABASE. By resetting the
database RMAN considers the new incarnation as the current incarnation of the database.
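A sketch of the related commands (the DBID and incarnation key shown are placeholders; take the
incarnation key from the listing):
RMAN> set dbid 1234567890;
RMAN> list incarnation of database;
RMAN> reset database to incarnation 2;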
30. List at least 6 advantages of RMAN backups compare to traditional hot backups?
RMAN has the following advantages over Traditional backups:
1. Ability to perform INCREMENTAL backups
2. Ability to Recover one block of datafile
3. Ability to automatically backup CONTROLFILE and SPFILE
4. Ability to delete the older ARCHIVE REDOLOG files, with the new one's automatically.
5. Ability to perform backup and restore with parallelism.
6. Ability to report the files needed for the backup.
7. Ability to RESTART the failed backup, without starting from beginning.
8. Much faster when compared to other TRADITIONAL backup strategies.
31. How do you enable the autobackup for the controlfile using RMAN?
issue command at rman prompt.....
RMAN> configure controlfile autobackup on;
also we can configure controlfile backup format......
RMAN> configure controlfile autobackup format for device type disk to
2> '$HOME/BACKUP/RMAN/%F.bkp';
$HOME/BACKUP/RMAN/ can be any desired location.
32. How do you identify what are the all the target databases that are being backed-up with
RMAN database?
You don't have any view to identify whether it is backed up or not. The only option is to connect
to the target database and issue list backup; this will give you the backup information with date
and time.
33. What is the difference between cumulative incremental and differential incremental
backups?
Differential backup: This is the default type of incremental backup which backs up all blocks
changed after the most recent backup at level n or lower.
Cumulative backup: Backup all blocks changed after the most recent backup at level n-1 or
lower.
34. How do you identify the block corruption in RMAN database? How do you fix it?
Using the v$database_block_corruption view you can find which blocks are corrupted:
sql> select file#, block# from v$database_block_corruption;
file#   block#
2       507
The block above is corrupted, so connect to RMAN and recover it:
RMAN> blockrecover datafile 2 block 507;
The above command recovers block 507. Now verify it:
RMAN> blockrecover corruption list;
35. How do you clone the database using RMAN software? Give brief steps. When do you use the
crosscheck command?
Two commands are available in RMAN to clone a database:
1) Duplicate
2) Restore.
CROSSCHECK is used to check whether backup pieces, proxy copies or disk copies still exist.
36. What is the difference between obsolete RMAN backups and expired RMAN backups?
The term obsolete does not mean the same as expired. In short obsolete means "not needed "
whereas expired means "not found."
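The related maintenance commands can be sketched as:
RMAN> report obsolete;          -- backups no longer needed per the retention policy
RMAN> delete obsolete;
RMAN> crosscheck backup;        -- marks pieces missing from disk/tape as EXPIRED
RMAN> delete expired backup;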
37. List some of the RMAN catalog view names which contain the catalog information?
RC_DATABASE_INCARNATION
RC_BACKUP_COPY_DETAILS
RC_BACKUP_CORRUPTION
RC_BACKUP_DATAFILE_SUMMARY, to name a few.
38. What is db_recovery_file_dest ? When do you need to set this value?
DB_RECOVERY_FILE_DEST specifies the location of the Flash Recovery Area. You need to set it
(together with DB_RECOVERY_FILE_DEST_SIZE) when you use the FRA for backups or when the
Flashback Database option is on.
39. How do you setup the RMAN tape backups?
rman target /
run
{
allocate channel ch1 device type sbt_tape maxpiecesize 4G
format '%d_%U_%T_%t';
sql 'alter system switch logfile';
backup database;
backup archivelog from time 'sysdate-7';
backup format '%d_CTLFILE_P_%U_%T_%t' current controlfile;
release channel ch1;
}
This is a backup script for a Tivoli backup server.
40. How do you install the RMAN recovery catalog?
Steps to be followed:
1) Create a connection string for the catalog database.
2) In the catalog database, create a new user (or use an existing user) and grant that user the
RECOVERY_CATALOG_OWNER privilege.
3) Log into RMAN with the connection string:
a) export ORACLE_SID
b) rman target / catalog <user>/<password>@<connection string>
4) RMAN> create catalog;
5) RMAN> register database;
41. When do you recommend hot backup? What are the pre-reqs?
Database must be in Archivelog mode.
The archive destination must be set, and LOG_ARCHIVE_START must be TRUE (in versions before 10g).
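The archivelog prerequisite can be checked and, if needed, enabled like this:
SQL> archive log list;
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;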
If you go through RMAN then
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO
'/u01/oracle/autobackup/%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO
'/u01/app/oracle/product/10.2.0/db_2/dbs/snapcf_dba.f'; # default
42. What is the difference between physical and logical backups?
In Oracle, a logical backup is one taken using either traditional Export/Import or the newer Data
Pump, whereas a physical backup is a copy of the physical operating-system files that make up
the database (datafiles, control files, archived logs).
43. What is RAID? What is RAID0? What is RAID1? What is RAID 10?
RAID: Redundant Array of Independent Disks.
RAID0: Concatenation and striping.
RAID1: Mirroring.
RAID10: Striped mirrors (combines RAID1 mirroring with RAID0 striping).
44. What are things which play major role in designing the backup strategy?
I believe a good backup strategy is not simply a backup but also a contingency plan. In this case
you should consider the following:
1. How long is the allowable downtime during recovery? If short, you could consider using Data Guard.
2. How long is the backup window? If short, I would advise using RMAN instead of user-managed backup.
3. If disk space for backups is limited, never use user-managed backup.
4. If the database is large, you could consider a full RMAN backup on the weekend and incremental backups on weekdays.
5. Schedule your backup at the time of least database activity, to avoid resource hogging.
6. Backup scripts should always be automated via scheduled jobs. This way operators never miss a backup window.
7. The retention period should also be considered. Try keeping at least 2 full backups (current and previous).
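As a sketch of points 4 and 7 (the schedule is illustrative), a weekend full backup with weekday incrementals could be:
RMAN> backup incremental level 0 database plus archivelog;   # weekend
RMAN> backup incremental level 1 database plus archivelog;   # weekdays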
Cold backup: shutdown the database and copy the datafiles with the help of
O.S. command. this is simply copying of datafiles just like any other text file copy.
Hot backup: the backup is taken while the database is running. The process to take a hot
backup is:
1) sql> alter database begin backup;
2) copy the datafiles.
3) after copying
sql> alter database end backup;
The BEGIN BACKUP clause freezes a checkpoint marker (an SCN/timestamp) in the datafile
headers; it is used for backup consistency. During restore, the database restores the data from
the backup up to that marker, and the remaining changes are recovered from the archived logs.
45. What is hot backup and what is cold backup?
A hot backup is taken while the database is online; a cold backup is taken while the database is shut down.
46. How do you test that your recovery was successful?
Open the database and verify the recovered data, for example by counting the rows of a recovered table:
SQL> SELECT count(*) FROM recovered_table;
47. How do you backup the Flash Recovery Area?
A:RMAN> BACKUP RECOVERY FILES;
The files on disk that have not previously been backed up will be backed up. They are full and
incremental backup sets, control file auto-backups, archive logs and datafile copies.
48. How to enable Fast Incremental Backup to backup only those data blocks that have
changed?
A:SQL> ALTER DATABASE enable BLOCK CHANGE TRACKING;
49. How do you set the flash recovery area?
A: SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 100G;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u10/oradata/school';
50. How can you use the CURRENT_SCN column in the V$DATABASE view to obtain the
current SCN?
A:SQL> SELECT current_scn FROM v$database;
51. You have taken a manual backup of a datafile using o/s. How RMAN will know about it?
You have to catalog that manual backup in RMAN's repository with the command:
RMAN> catalog datafilecopy '/DB01/BACKUP/users01.dbf';
Restrictions: the copy must be accessible on disk and must be a complete image copy of a single file.
52. In catalog database, if some of the blocks are corrupted due to system crash, How will you
recover?
using RMAN BLOCK RECOVER command
53. List advantages of RMAN backups compare to traditional hot backups?
RMAN has the following advantages over Traditional backups:
1. Ability to perform INCREMENTAL backups
2. Ability to recover one block of a datafile
3. Ability to automatically back up the CONTROLFILE and SPFILE
4. Ability to delete older ARCHIVED REDO LOG files
54. How do you identify the expired, active, obsolete backups? Which RMAN command you
use?
Use the commands:
RMAN> crosscheck backup;
RMAN> crosscheck archivelog all;
RMAN> report obsolete;
RMAN> list backup;
RMAN> list archivelog all;
55. How do you enable the autobackup for the controlfile using RMAN?
RMAN> configure controlfile autobackup on;
We can also configure the controlfile autobackup format:
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
56. How do you identify all the target databases that are being backed up with an
RMAN database?
You don't have any view to identify whether it is backed up or not. The only option is to connect
to the target database and run LIST BACKUP; this will give you the backup information with date
and time.
57. What is the difference between cumulative incremental and differential incremental
backups?
Differential backup: the default type of incremental backup, which backs up all blocks changed
after the most recent backup at level n or lower.
Cumulative backup: backs up all blocks changed after the most recent backup at level n-1 or
lower.
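For example (the level numbers are illustrative):
RMAN> backup incremental level 0 database;              # base for either type
RMAN> backup incremental level 1 database;              # differential (default)
RMAN> backup incremental level 1 cumulative database;   # cumulative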
58. Explain how to setup the physical stand by database with RMAN?
$ export ORACLE_SID=TEST
$ rman target /
RMAN> show all;
Using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1 DAYS;
CONFIGURE BACKUP OPTIMIZATION
59. What is auxiliary channel in RMAN? When do you need this?
An auxiliary channel is a connection to the auxiliary instance. If you do not have automatic
channels configured, then before issuing the DUPLICATE command, manually allocate at least
one auxiliary channel within the same RUN command.
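For example, inside the same RUN block as the DUPLICATE command (the auxiliary database name DUPDB is hypothetical):
RMAN> run
{
allocate auxiliary channel aux1 device type disk;
duplicate target database to DUPDB;
}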
60. What is backup set?
RMAN can also store its backups in an RMAN-exclusive format called a backup set. A backup set
is a collection of backup pieces, each of which may contain one or more datafile backups.
61. What is RMAN and how does one use it?
Recovery Manager (RMAN) is an Oracle-provided utility for backing up, restoring and
recovering Oracle databases. RMAN ships with the database server and doesn't require a
separate installation. The RMAN executable is located in your ORACLE_HOME/bin directory.
62. What kinds of backup are supported by RMAN?
Backup sets, datafile (image) copies, and OS backups.
What is the Flash Recovery Area?
A: It is a unified storage location for all recovery-related files and activities in an Oracle
Database. It includes control files, archived log files, flashback logs, control file autobackups,
datafiles, and RMAN files.
63. How do you define a Flash Recovery Area?
A: To define a Flash Recovery Area set the following Oracle Initialization Parameters.
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 100G;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u10/oradata/school';
64. How do you use the V$RECOVERY_FILE_DEST view to display information regarding the
flash recovery area?
A: SQL> SELECT name, space_limit, space_used, space_reclaimable, number_of_files
FROM v$recovery_file_dest;
65. How can you display warning messages?
A: SQL> SELECT object_type, message_type, message_level, reason, suggested_action
FROM dba_outstanding_alerts;
66. How do you follow the best practice of Oracle Managed Files (OMF) to let the Oracle
database create and manage the underlying operating system files of a database?
A: SQL> ALTER SYSTEM SET db_create_file_dest = '/u03/oradata/school';
SQL> ALTER SYSTEM SET db_create_online_log_dest_1 = '/u04/oradata/school';
67. How to enable Fast Incremental Backup to backup only those data blocks that have
changed?
A:SQL> ALTER DATABASE enable BLOCK CHANGE TRACKING;
68. How do you monitor block change tracking?
A:SQL> SELECT filename, status, bytes FROM v$block_change_tracking;
It shows where the block change-tracking file is located, the status of it and the size.
69. How do you use the V$BACKUP_DATAFILE view to display how effective the block change
tracking is in minimizing the incremental backup I/O?
A: SQL> SELECT file#, AVG(datafile_blocks), AVG(blocks_read),
AVG(blocks_read/datafile_blocks), AVG(blocks)
FROM v$backup_datafile
WHERE used_change_tracking = 'YES' AND incremental_level > 0
GROUP BY file#;
If the AVG(blocks_read/datafile_blocks) column is high then you may have to decrease the time
between the incremental backups.
70. How do you backup the entire database?
A:RMAN> BACKUP DATABASE;
71. How do you backup an individual tablespace?
A:RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
RMAN> BACKUP TABLESPACE system;
72. How do you backup datafiles and control files?
A:RMAN> BACKUP DATAFILE 3;
RMAN> BACKUP CURRENT CONTROLFILE;
How do you use a fast recovery without restoring all backups from their backup location to the
location specified in the controlfile?
A: RMAN> SWITCH DATABASE TO COPY;
RMAN will adjust the control file so that the datafiles point to the backup file locations and
then start recovery.
73. Why use RMAN?
A. 1. No extra cost; it is available free.
2. RMAN, introduced in Oracle 8, has become simpler with each new version and is easier than
user-managed backups.
3. Proper security.
4. You are 100% sure your database has been backed up.
5. It keeps details of backups taken in a central repository.
6. Facility for testing the validity of backups, and commands like CROSSCHECK to check the
status of backups.
7. Oracle 10g has further optimized incremental backups, improving performance during
backup and recovery.
8. Parallel operations are supported.
9. Better querying facility for knowing different details of a backup.
10. No extra redo is generated when a backup is taken (compared with user-managed online
backup), which saves space on disk.
11. RMAN is an intelligent tool.
12. Maintains a repository of backup metadata.
13. Remembers backup set locations.
14. Knows what needs to be backed up.
15. Knows what is required for recovery.
16. Knows which backups are redundant.
17. It handles database corruption.
74. Oracle Enhancement for Rman in 10g
A. 1.Flash Recovery Area
2.Incrementally Updated Backups
3.Faster Incremental Backups
4.SWITCH DATABASE COMMAND.
5.Binary Compression
6.Global Scripting
7.Duration Clause
8.Configure This
9.Oracle Enhancement for Rman in 10g
10.Automatic Channel Failover
11.Compress Backup Sets
12.Recovery Through Reset Logs
13.Cross Backup Sets
75. Global Scripting
A. RMAN> print script full_backup to file 'my_script_file.txt'
Oracle Database 10g provides a new concept of global scripts, which you can execute against
any database registered in the recovery catalog, as long as your RMAN client is connected to the
recovery catalog and a target database simultaneously.
RMAN> create global script global_full_backup
76. Outline the steps for recovery of missing data file?
Losing Datafiles Whenever you are in NoArchivelog Mode:
###################################################
If you are in noarchivelog mode and you lose any datafile, then whether the media failure is
temporary or permanent, the database will automatically shut down. If the failure is temporary,
correct the underlying hardware and start the database; crash recovery will usually recover the
committed transactions of the database from the online redo log files. If you have a permanent
media failure, restore the whole database from a good backup, as follows:
If a media failure damages datafiles in a NOARCHIVELOG database, then the only option for
recovery is usually to restore a consistent whole database backup. As you are in noarchivelog
mode, you have to understand that changes made after the backup was taken are lost.
If you have a logical backup (an export file), you can import that also.
To recover a database in noarchivelog mode, follow this procedure:
In order to recover database in noarchivelog mode you have to follow the following procedure.
1) If the database is open, shut it down:
SQL> SHUTDOWN IMMEDIATE;
2)If possible, correct the media problem so that the backup database files can be restored to
their original locations.
3) Copy all of the backup control files and datafiles to the default location if you corrected the
media failure. However, you can restore to another location. Remember to restore all of the
files, not only the damaged ones.
4) Because online redo logs are not backed up, you cannot restore them with the datafiles and
control files. To allow the database to reset the online redo logs, you must perform incomplete
recovery:
RECOVER DATABASE UNTIL CANCEL
CANCEL
5)Open the database in RESETLOGS mode:
ALTER DATABASE OPEN RESETLOGS;
To rename your control files, or in case of media damage, you can copy them to another
location and then point to them by setting (if using an spfile):
STARTUP NOMOUNT
alter system set
control_files='+ORQ/orq1/controlfile/control01.ctl','+ORQ/orq1/controlfile/control02.ctl'
scope=spfile;
STARTUP FORCE MOUNT;
In order to rename data files or online redo log files first copy it to new location and then point
control file to new location by,
ALTER DATABASE RENAME FILE '+ORQ/orq1/datafile/system01.dbf'
TO '+ORQ/orq1/datafile/system02.dbf';
Losing Datafiles Whenever you are in Archivelog Mode:
###################################################
If the lost datafile belongs to the SYSTEM tablespace, or contains active undo segments, the
database shuts down. If the failure is temporary, correct the underlying hardware and start the
database; crash recovery will usually recover the committed transactions of the database from
the online redo log files.
If the lost datafile is not under the SYSTEM tablespace and does not contain active undo
segments, the affected datafile goes offline and the database remains open. To fix the
problem, take the affected tablespace offline, then restore and recover the tablespace.
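For the non-SYSTEM case, a sketch of the fix in RMAN (the tablespace name is illustrative):
RMAN> sql 'alter tablespace users offline immediate';
RMAN> restore tablespace users;
RMAN> recover tablespace users;
RMAN> sql 'alter tablespace users online';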
77. Outline the steps for recovery with missing online redo logs?
Redo log is CURRENT (DB was shut down cleanly)
If the CURRENT redo log is lost and if the DB is closed consistently, OPEN RESETLOGS can be
issued directly without any transaction loss. It is advisable to take a full backup of DB
immediately after the STARTUP.
Redo log is CURRENT (DB was not shut down cleanly)
When a current redo log is lost, the transactions in the log file are also lost before making it to
the archived logs. Since a DB startup can no longer perform crash recovery (the available
online log files are not sufficient to start up the DB in a consistent state), an incomplete media
recovery is the only option. We will need to restore the DB from a previous backup and recover
to the point just before the lost redo log file. The DB will need to be opened in RESETLOGS
mode. There is some transaction loss in this scenario.
RMAN> RESTORE CONTROLFILE FROM '<backup tag location>';
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE UNTIL TIME "to_date('MAR 05 2009 19:00:00','MON DD YYYY HH24:MI:SS')";
RMAN> ALTER DATABASE OPEN RESETLOGS;
78. Outline steps for recovery with missing archived redo logs?
If a redo log file is already archived, its loss can safely be ignored. Since all the changes in the DB
are already archived, and the online log file is only waiting for its turn to be re-written by LGWR
(redo log files are written circularly), the loss of the redo log file doesn't matter much. It may be
re-created using:
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE CLEAR LOGFILE GROUP <group#>;
This re-creates the log files in that group and no transactions are lost. The database can be
opened normally after this.
79. What is FRA ? When do you use this ?
The flash recovery area is where you can store not only the traditional components of a backup
strategy, such as control files, archived log files, and Recovery Manager (RMAN) datafile copies,
but also a number of other file components such as flashback logs. The flash recovery area
simplifies backup operations, and it increases the availability of the database because many
backup and recovery operations using the flash recovery area can be performed while the
database is open and available to users.
Because the space in the flash recovery area is limited by the initialization parameter
DB_RECOVERY_FILE_DEST_SIZE, the Oracle database keeps track of which files are no longer
needed on disk so that they can be deleted when there is not enough free space for new files.
Each time a file is deleted from the flash recovery area, a message is written to the alert log.
Messages are also written in other circumstances: if no files can be deleted and the recovery
area's used space is at 85 percent, a warning message is issued; when the space used reaches
97 percent, a critical warning is issued. These warnings are recorded in the alert log, are
viewable in the data dictionary view DBA_OUTSTANDING_ALERTS, and are available to you on
the main page of EM Database Control.
80. What is Channel? How do you enable the parallel backups with RMAN?
Channel is a link that RMAN requires to link to target database. This link is required when
backup and recovery operations are performed and recorded. This channel can be allocated
manually or can be preconfigured by using
automatic channel allocation.
The number of allocated channels determines the maximum degree of parallelism that is used
during backup, restore or recovery. For example, if you allocate 4 channels for a backup
operation, 4 background processes for the operation can run concurrently.
Parallelization of backup sets allocates multiple channels and assigns files to specific channels.
You can configure parallel backups by setting a PARALLELISM option of the CONFIGURE
command to a value greater than 1 or by
manually allocating multiple channels.
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET;
81. What are RTO, MTBF, and MTTR?
RTO (Recovery Time Objective): the maximum amount of time that the database can be
unavailable and still satisfy SLAs.
MTBF (Mean Time Between Failures): the average time between failures.
MTTR (Mean Time To Recover): the average time needed to recover; reduced by fast recovery
solutions.
82. How do you enable the encryption for RMAN backups?
If you wish to modify your existing backup environment so that all RMAN backups are
encrypted, perform the following steps:
· Set up the Oracle Encryption Wallet
· Issue the following RMAN command:
RMAN> CONFIGURE ENCRYPTION ALGORITHM 'AES256'; -- use 256 bit encryption
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON; -- encrypt backups
83. What is the difference between restoring and recovering?
Restoring involves copying backup files from secondary storage (backup media) to disk. This can
be done to replace damaged files or to copy/move a database to a new location.
Recovery is the process of applying redo logs to the database to roll it forward. One can roll
forward until a specific point in time (before the disaster occurred), or roll forward until the last
transaction recorded in the log files.
SQL> connect SYS as SYSDBA
SQL> RECOVER DATABASE UNTIL TIME '2001-03-06:16:00:00' USING BACKUP CONTROLFILE;
RMAN> run {
set until time to_date('04-Aug-2004 00:00:00', 'DD-MON-YYYY HH24:MI:SS');
restore database;
recover database;
}
What are the various tape backup solutions available in the market?
How do you generate the begin backup script?
Outline the steps for recovering the full database from cold backup?
Explain the steps to perform the point in time recovery with a backup which is taken before the
resetlogs of the db?
Outline the steps involved in TIME based recovery from the full database from hot backup?
Is it possible to take Catalog Database Backup using RMAN? If Yes, How?
Can a schema be restored in oracle 9i RMAN when the schema having numerous table spaces?
Outline the steps for changing the DBID in a cloned environment?
How do you identify the expired, active, obsolete backups? Which RMAN command you use?
Explain how to setup the physical stand by database with RMAN?
List the steps required to enable the RMAN backup for a target database?
How do you verify the integrity of the image copy in RMAN environment?
Outline the steps involved in SCN based recovery from the full database from hot backup?
Outline the steps involved in CANCEL based recovery from the full database from hot backup?
Is it possible to specific tables when using RMAN DUPLICATE feature? If yes, how?
******************************************************************************
******************************************************************************
**********
miscellaneous commands
******************************************************************************
******************************************************************************
**********
commands for profiles
create profile profile_name limit
sessions_per_user 3
connect_time 1400
idle_time 120
password_life_time 60
password_reuse_time 60
password_reuse_max unlimited
failed_login_attempts 3
password_lock_time unlimited;
#Assigning profile to user
alter user username profile profile_name;
#to unlock user account
alter user username account unlock;
#drop a profile
drop profile profile_name;
#You must specify cascade to de-assign the profile from existing users:
drop profile profile_name cascade;
#undo relataed commands
1. alter system set undo_management = manual;
# But this command will fail because this parameter cannot be set dynamically.
2. alter system set undo_tablespace = new_tbs;
# The old undo tablespace remains active until its last transaction finishes; only one UNDO
tablespace can be active at a time.
3. alter tablespace undotbs retention guarantee;   -- or: retention noguarantee
# With retention guarantee, long-running queries benefit, but DML may get errors if undo
space runs out.
Log File related commands
-------------------------
alter database clear logfile group 3;
alter database clear unarchived logfile group 3;
archive log list;
alter system archive log current;
alter system switch logfile;
alter system checkpoint;
How to create a user
-------------------
SQL> create user shekhar
identified by "Pass!"
default tablespace user_data
quota unlimited on user_data;
How to Change user quota
------------------------
SQL> alter user shekhar
quota 100M on sales_data;
How to Change the password
--------------------------
SQL> alter user shekhar
identified by Pass1234;
How to unlock a locked user
---------------------------
SQL> alter user shekhar
account unlock;
SQL> select username, status
from dba_users;
How to lock a user
------------------
SQL> alter user shekhar
account lock;
How to force a user to change password
--------------------------------------
SQL> alter user shekhar
password expire;
How to create a user with external authetication
------------------------------------------------
$ useradd peter        <== should be run as root
SQL> create user ops$peter
identified externally;
$ whoami               <== you should be logged in as peter
$ sqlplus /
SQL> show user
ops$peter
How to assign a role to a user
------------------------------
SQL> grant connect to shekhar;
Note: connect is a role already defined by Oracle, like dba, resource, etc.
How to work with a role
-------------------------
SQL> create role salesrep;
SQL> create role salesmgr;
SQL> grant create session to salesrep;
SQL> grant create session to salesmgr;
SQL> grant select on orders to salesrep;
SQL> grant select,insert,update,delete
on orders to salesmgr;
Likewise grant all necessary grants to
the roles that you have created.
SQL> grant salesrep to shekhar;
SQL> grant salesmgr to peter;
SQL> grant salesmgr to public;
Where to find information on roles
----------------------------------
SQL> select * from dba_roles;
SQL> select * from dba_role_privs;
SQL> select * from dba_sys_privs;
SQL> select * from dba_tab_privs;
How to create a profiles
-----------------------
Profiles allow us to manage certain attributes of a user's connection.
1. connect time
2. idle time
3. no. of sessionss per user
4. Password attributes
a. length of a password
b. when does the password expire
c. no. of invalid login attempts
5. resource limits (not recommended)
a. cpu
b. number of blocks read
Note: Use resource manager instead
What is a default profile
------------------------
It is a profile whose name is "default".
There are no restrictions put on this
profile. All users have been assigned this
"default" profile.
SQL> create profile pr_sales limit
sessions_per_user 3
connect_time 120
idle_time 10
failed_login_attempts 5
password_verify_function myfunc;
note: A sample function is provided in
a file named:
$ORACLE_HOME/rdbms/admin/utlpwdmg.sql
SQL> alter user shekhar profile pr_sales;
Where is the info for profiles kept
-----------------------------------
SQL> select * from dba_profiles;
How to find out what profiles are assigned
------------------------------------------
SQL> select username, profile
from dba_users;
# tablespace commands
# creating tablespace
create tablespace tablespace_name datafile 'path_of_datafile' size <size>;
#The following statement creates a locally managed tablespace named lmtbsb and specifies
AUTOALLOCATE:
CREATE TABLESPACE lmtbsb DATAFILE '/u01/oracle/data/sumittbs01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
#The following example creates a tablespace with uniform 128K extents. (In a database with 2K
blocks, each extent would be equivalent to 64 database blocks). Each 128K extent is
represented by a bit in the extent bitmap for this file.
CREATE TABLESPACE sumittbs DATAFILE '/u02/oracle/data/sumittbs01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
#The following statement creates tablespace lmtbsb with automatic segment space
management:
CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/sumittbs01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;
#Adding a datafile. For example:
ALTER TABLESPACE sumittbs
ADD DATAFILE '/u02/oracle/data/sumittbs02.dbf' SIZE 1M;
#The following statement creates a temporary tablespace in which each extent is 16M. Each
16M extent (the equivalent of 8192 blocks when the standard block size is 2K) is represented
by a bit in the bitmap for the file.
CREATE TEMPORARY TABLESPACE sumittemp TEMPFILE '/u02/oracle/data/sumittemp01.dbf'
SIZE 20M REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;
#The following statements take a tempfile offline and bring it online:
ALTER DATABASE TEMPFILE '/u02/oracle/data/sumittemp01.dbf' OFFLINE;
ALTER DATABASE TEMPFILE '/u02/oracle/data/sumittemp01.dbf' ONLINE;
#The following statement resizes a temporary file:
ALTER DATABASE TEMPFILE '/u02/oracle/data/sumittemp01.dbf' RESIZE 18M;
#The following statement drops a temporary file and deletes the operating system file:
ALTER DATABASE TEMPFILE '/u02/oracle/data/sumittemp02.dbf' DROP
INCLUDING DATAFILES;
#The following example takes the users tablespace offline normally:
ALTER TABLESPACE users OFFLINE NORMAL;
#The following statement makes the flights tablespace read-only:
ALTER TABLESPACE flights READ ONLY;
# renaming tablespace
ALTER TABLESPACE users RENAME TO usersts;
#dropping a tablespace
DROP TABLESPACE users INCLUDING CONTENTS AND DATAFILES;
******************************************************************************
******************************************************************************
**********
all interview questions
******************************************************************************
******************************************************************************
**********
Here are a few interview questions with answers found on the internet. As I don't have time to
format these questions for the wiki, I am just posting them, hoping someone will format them.
Oracle Interview Questions
1. Explain the difference between a hot backup and a cold backup and the benefits associated
with each.
A hot backup is basically taking a backup of the database while it is still up and running, and it
must be in archive log mode. A cold backup is taking a backup of the database while it is shut
down and does not require archive log mode. The benefit of taking a hot backup is that the
database is still available for use while the backup is occurring, and you can recover the
database to any point in time. The benefit of taking a cold backup is that it is typically easier to
administer the backup and recovery process. In addition, since cold backups do not require
archive log mode, there is a slight performance gain as the database is not cutting archive logs
to disk.
2. You have just had to restore from backup and do not have any control files. How would you
go about bringing up this database?
I would create a text-based backup control file, stipulating where on disk all the datafiles
were, and then issue the RECOVER command with the USING BACKUP CONTROLFILE clause.
3. How do you switch from an init.ora file to a spfile?
Issue the create spfile from pfile command.
4. Explain the difference between a data block, an extent and a segment.
A data block is the smallest unit of logical storage for a database object. As objects grow they
take chunks of additional storage that are composed of contiguous data blocks. These groupings
of contiguous data blocks are called extents. All the extents that an object takes when grouped
together are considered the segment of the database object.
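The hierarchy can be inspected in the dictionary; for example, listing the extents of a hypothetical table EMP:
SQL> select segment_name, extent_id, block_id, blocks
from dba_extents
where segment_name = 'EMP';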
5. Give two examples of how you might determine the structure of the table DEPT.
Use the describe command or use the dbms_metadata.get_ddl package.
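For example, for DEPT:
SQL> describe dept
SQL> select dbms_metadata.get_ddl('TABLE','DEPT') from dual;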
6. Where would you look for errors from the database engine?
In the alert log.
7. Compare and contrast TRUNCATE and DELETE for a table.
Both the TRUNCATE and DELETE commands have the desired outcome of getting rid of all the
rows in a table. The difference between the two is that TRUNCATE is a DDL operation that just
moves the high water mark and produces little rollback data. DELETE, on the other hand, is a
DML operation that produces rollback data and thus takes longer to complete.
8. Give the reasoning behind using an index.
Faster access to data blocks in a table.
9. Give the two types of tables involved in producing a star schema and the type of data they
hold.
Fact tables and dimension tables. A fact table contains measurements while dimension tables
will contain data that will help describe the fact tables.
10. What type of index should you use on a fact table?
A Bitmap index.
11. Give some examples of the types of database constraints you may find in Oracle and indicate
their purpose.
* A Primary or Unique Key can be used to enforce uniqueness on one or more columns.
* A Referential Integrity Constraint can be used to enforce a Foreign Key relationship
between two tables.
* A Not Null constraint - to ensure a value is entered in a column
* A Value Constraint - to check a column value against a specific set of values.
12. A table is classified as a parent table and you want to drop and re-create it. How would you
do this without affecting the children tables?
Disable the foreign key constraint to the parent, drop the table, re-create the table, enable
the foreign key constraint.
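Following those steps (the table and constraint names here are hypothetical):

```sql
-- 1. Disable the foreign key on the child table
ALTER TABLE emp DISABLE CONSTRAINT emp_dept_fk;
-- 2. Drop and re-create the parent
DROP TABLE dept;
CREATE TABLE dept (deptno NUMBER PRIMARY KEY, dname VARCHAR2(30));
-- 3. Re-enable the foreign key
ALTER TABLE emp ENABLE CONSTRAINT emp_dept_fk;
```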
13. Explain the difference between ARCHIVELOG mode and NOARCHIVELOG mode and the
benefits and disadvantages to each.
ARCHIVELOG mode is a mode in which the database preserves a copy of every filled redo log,
so that all transactions are kept and you can recover to any point in time.
NOARCHIVELOG mode is basically the absence of ARCHIVELOG mode and has the disadvantage
of not being able to recover to any point in time. NOARCHIVELOG mode does have the
advantage of not having to write transactions to an archive log and thus increases the
performance of the database slightly.
14. What command would you use to create a backup control file?
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
15. Give the stages of instance startup to a usable state where normal users may access it.
STARTUP NOMOUNT - Instance startup
STARTUP MOUNT - The database is mounted
STARTUP OPEN - The database is opened
16. What column differentiates the V$ views to the GV$ views and how?
The INST_ID column, which indicates which instance in a RAC environment the information came
from.
17. How would you go about generating an EXPLAIN plan?
Create a plan table with utlxplan.sql.
Use EXPLAIN PLAN SET STATEMENT_ID = 'tst1' INTO plan_table FOR <a SQL statement>;
Look at the explain plan with utlxplp.sql or utlxpls.sql.
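Put together, the three steps might look like this (the statement being explained is illustrative):

```sql
-- 1. Create PLAN_TABLE once
@?/rdbms/admin/utlxplan.sql

-- 2. Explain a statement
EXPLAIN PLAN SET STATEMENT_ID = 'tst1' INTO plan_table FOR
  SELECT * FROM emp WHERE deptno = 10;

-- 3. Display the plan
@?/rdbms/admin/utlxpls.sql
```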
18. How would you go about increasing the buffer cache hit ratio?
Use the buffer cache advisory over a given workload and then query the v$db_cache_advice
table. If a change was necessary then I would use the alter system set db_cache_size command.
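A sketch of the advisory query (DB_CACHE_ADVICE must be ON for the view to be populated):

```sql
SELECT size_for_estimate,          -- candidate cache size in MB
       estd_physical_read_factor,  -- relative change in physical reads
       estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
 ORDER BY size_for_estimate;

-- If a larger cache looks worthwhile:
-- ALTER SYSTEM SET db_cache_size = 512M;
```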
19. Explain an ORA-01555.
You get this error when a query needs a read-consistent image of data whose before-image in
rollback (undo) has been overwritten - a snapshot too old. It can usually be solved by
increasing the undo retention or increasing the size of the rollback segments. You should also
look at the logic involved in the application getting the error message.
20. Explain the difference between $ORACLE_HOME and $ORACLE_BASE.
ORACLE_BASE is the root directory for Oracle. ORACLE_HOME, located beneath ORACLE_BASE,
is where the Oracle products reside.
Oracle Interview Questions
1. How would you determine the time zone under which a database was operating?
SELECT dbtimezone FROM DUAL;
2. Explain the use of setting GLOBAL_NAMES equal to TRUE.
It ensures the use of consistent naming conventions for databases and database links in a
networked environment.
3. What command would you use to encrypt a PL/SQL application?
WRAP
4. Explain the difference between a FUNCTION, PROCEDURE and PACKAGE.
They are all named PL/SQL blocks.
A function must return a value and can be called inside a query.
A procedure may or may not return a value (through OUT parameters).
A package is a collection of functions, procedures and variables which can be logically grouped
together.
5. Explain the use of table functions.
Table functions return a collection of rows and can be queried in the FROM clause of a SELECT
by wrapping the call in the TABLE() operator, letting PL/SQL results be treated like a table.
6. Name three advisory statistics you can collect.
Buffer Cache Advice (V$DB_CACHE_ADVICE), Shared Pool Advice (V$SHARED_POOL_ADVICE) and
PGA Target Advice (V$PGA_TARGET_ADVICE).
7. Where in the Oracle directory tree structure are audit traces placed?
By default in $ORACLE_HOME/rdbms/audit; the location is controlled by the AUDIT_FILE_DEST
parameter.
8. Explain materialized views and how they are used.
A materialized view physically stores the result of a query. They are used to pre-compute and
store aggregated or joined data (for example in data warehouses) and to replicate data; they
can be refreshed on demand or on a schedule, and the optimizer can rewrite queries to use them.
9. When a user process fails, what background process cleans up after it?
PMON
10. What background process refreshes materialized views?
Job Queue Process (CJQ)
11. How would you determine what sessions are connected and what resources they are
waiting for?
v$session,v$session_wait
12. Describe what redo logs are.
Redo logs are files that record all changes made to the database; they are written by the LGWR
process and are used to recover committed transactions in the event of a failure.
13. How would you force a log switch?
alter system switch logfile;
14. Give two methods you could use to determine what DDL changes have been made.
Enable auditing of DDL statements, or check the LAST_DDL_TIME column of DBA_OBJECTS.
15. What does coalescing a tablespace do?
Coalescing simply takes contiguous free extents and combines them into a single larger free extent.
16. What is the difference between a TEMPORARY tablespace and a PERMANENT tablespace?
A TEMPORARY tablespace holds sort segments that get cleared once the transaction is done,
whereas a PERMANENT tablespace retains its data.
17. Name a tablespace automatically created when you create a database.
SYSTEM
18. When creating a user, what permissions must you grant to allow them to connect to the
database?
Grant create session to username;
19. How do you add a data file to a tablespace?
alter tablespace USERS add datafile '/ora01/oradata/users02.dbf' size 50M;
20. How do you resize a data file?
alter database datafile '/ora01/oradata/users02.dbf' resize 100M;
21. What view would you use to look at the size of a data file?
dba_data_files
22. What view would you use to determine free space in a tablespace?
dba_free_space
23. How would you determine who has added a row to a table?
By implementing an INSERT trigger that logs details of each INSERT operation on the
table.
24. How can you rebuild an index?
ALTER INDEX index_name REBUILD;
25. Explain what partitioning is and what its benefit is.
Partitioning splits a table (or index) into smaller pieces, each stored as its own segment; by
using the partitioning technique we can enhance the performance and manageability of table
access, since queries can be restricted to the relevant partitions.
26. You have just compiled a PL/SQL package but got errors, how would you view the errors?
show errors
27. How can you gather statistics on a table?
exec dbms_stats.gather_table_stats
Also, remember to analyze all associated indexes on that table using
dbms_stats.gather_index_stats
28. How can you enable a trace for a session?
ALTER SESSION SET sql_trace = TRUE;
29. What is the difference between the SQL*Loader and IMPORT utilities?
SQL*Loader loads external data held in OS files into Oracle database tables, while the IMPORT
utility loads only data that was exported by the EXPORT utility of the Oracle database.
30. Name two files used for network connection to a database.
TNSNAMES.ORA and SQLNET.ORA
Oracle Interview Questions
1. Describe the difference between a procedure, function and anonymous pl/sql block.
Candidate should mention use of the DECLARE statement; a function must return a value while a
procedure doesn't have to.
2. What is a mutating table error and how can you get around it?
This happens with triggers. It occurs because the trigger is trying to query or modify the table
it is currently firing on. The usual fix involves either the use of views or temporary tables so
the database is selecting from one while updating the other.
3. Describe the use of %ROWTYPE and %TYPE in PL/SQL.
Expected answer: %ROWTYPE allows you to associate a variable with an entire table row.
%TYPE associates a variable with a single column's type.
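A short illustration, assuming the classic EMP table:

```sql
DECLARE
  v_emp  emp%ROWTYPE;       -- record matching an entire EMP row
  v_name emp.ename%TYPE;    -- scalar matching the ENAME column's type
BEGIN
  SELECT * INTO v_emp FROM emp WHERE empno = 7839;
  v_name := v_emp.ename;
END;
/
```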
4. What packages (if any) has Oracle provided for use by developers? Expected answer: Oracle
provides the DBMS_ series of packages. There are many which developers should be aware of
such as DBMS_SQL, DBMS_PIPE, DBMS_TRANSACTION, DBMS_LOCK, DBMS_ALERT,
DBMS_OUTPUT, DBMS_JOB, DBMS_UTILITY, DBMS_DDL, UTL_FILE. If they can mention a few of
these and describe how they used them, even better. If they include the SQL routines provided
by Oracle, great, but not really what was asked.
5. Describe the use of PL/SQL tables Expected answer: PL/SQL tables are scalar arrays that can
be referenced by a binary integer. They can be used to hold values for use in later queries or
calculations. In Oracle 8 they will be able to be of the %ROWTYPE designation, or RECORD.
6. When is a DECLARE statement needed? The DECLARE statement is used in PL/SQL anonymous
blocks, such as stand-alone, non-stored PL/SQL procedures. It must come first in a stand-alone
PL/SQL file if it is used.
7. In what order should a open/fetch/loop set of commands in a PL/SQL block be implemented
if you use the %NOTFOUND cursor variable in the exit when statement? Why? Expected
answer: OPEN then FETCH then LOOP followed by the exit when. If not specified in this order
will result in the final return being done twice because of the way the %NOTFOUND is handled
by PL/SQL.
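The ordering can be sketched as follows (the cursor and table are illustrative):

```sql
DECLARE
  CURSOR c_emp IS SELECT ename FROM emp;
  v_name emp.ename%TYPE;
BEGIN
  OPEN c_emp;
  LOOP
    FETCH c_emp INTO v_name;   -- FETCH comes before the exit test...
    EXIT WHEN c_emp%NOTFOUND;  -- ...so the final row is not processed twice
    DBMS_OUTPUT.PUT_LINE(v_name);
  END LOOP;
  CLOSE c_emp;
END;
/
```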
8. What are SQLCODE and SQLERRM and why are they important for PL/SQL developers?
Expected answer: SQLCODE returns the value of the error number for the last error
encountered. The SQLERRM returns the actual error message for the last error encountered.
They can be used in exception handling to report, or, store in an error log table, the error that
occurred in the code. These are especially useful for the WHEN OTHERS exception.
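For example, in a WHEN OTHERS handler (the error_log table here is hypothetical):

```sql
DECLARE
  v_code NUMBER;
  v_msg  VARCHAR2(512);
BEGIN
  RAISE NO_DATA_FOUND;  -- stand-in for work that may fail
EXCEPTION
  WHEN OTHERS THEN
    -- SQLCODE/SQLERRM must be copied to locals before use in a SQL statement
    v_code := SQLCODE;
    v_msg  := SQLERRM;
    INSERT INTO error_log (err_code, err_msg, logged_at)
    VALUES (v_code, v_msg, SYSDATE);
    COMMIT;
END;
/
```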
9. How can you find within a PL/SQL block, if a cursor is open? Expected answer: Use the
%ISOPEN cursor status variable.
10. How can you generate debugging output from PL/SQL? Expected answer: Use the
DBMS_OUTPUT package. Another possible method is to just use the SHOW ERROR command,
but this only shows errors. The DBMS_OUTPUT package can be used to show intermediate
results from loops and the status of variables as the procedure is executed. The new package
UTL_FILE can also be used.
11. What are the types of triggers? Expected Answer: There are 12 types of triggers in PL/SQL,
formed by combining the BEFORE/AFTER timing, ROW/STATEMENT level, and the INSERT, UPDATE and
DELETE events: BEFORE INSERT ROW, AFTER INSERT ROW, BEFORE INSERT STATEMENT, AFTER INSERT
STATEMENT, etc.
Oracle Interview Questions
1. A tablespace has a table with 30 extents in it. Is this bad? Why or why not.
Multiple extents in and of themselves aren't bad. However, if you also have chained rows this
can hurt performance.
2. How do you set up tablespaces during an Oracle installation?
You should always attempt to use the Optimal Flexible Architecture (OFA) standard or another
partitioning scheme to ensure proper separation of SYSTEM, ROLLBACK, REDO LOG, DATA,
TEMPORARY and INDEX segments.
3. You see multiple fragments in the SYSTEM tablespace, what should you check first?
Ensure that users don't have the SYSTEM tablespace as their TEMPORARY or DEFAULT
tablespace assignment by checking the DBA_USERS view.
4. What are some indications that you need to increase the SHARED_POOL_SIZE parameter?
Poor data dictionary or library cache hit ratios, getting error ORA-04031. Another indication is
steadily decreasing performance with all other tuning parameters the same.
5. What is the general guideline for sizing db_block_size and db_multi_block_read for an
application that does many full table scans?
Oracle almost always reads in 64KB chunks. The product of the two parameters should equal
64KB or a multiple of 64KB.
6. What is the fastest query method for a table?
Fetch by rowid
7. Explain the use of TKPROF? What initialization parameter should be turned on to get full
TKPROF output?
The tkprof tool is a tuning tool used to determine cpu and execution times for SQL statements.
You use it by first setting timed_statistics to true in the initialization file and then turning on
tracing for either the entire database via the sql_trace parameter or for the session using the
ALTER SESSION command. Once the trace file is generated you run the tkprof tool against the
trace file and then look at the output from the tkprof tool. This can also be used to generate
explain plan output.
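The workflow might be sketched as follows (the trace file name and schema are illustrative):

```sql
-- In the session to be traced
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;
-- ... run the SQL of interest ...
ALTER SESSION SET sql_trace = FALSE;

-- Then, at the OS prompt, format the trace file found in user_dump_dest:
--   tkprof orcl_ora_12345.trc report.txt explain=scott/tiger sys=no
```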
8. When looking at v$sysstat you see that sorts (disk) is high. Is this bad or good? If bad, how
do you correct it?
If you get excessive disk sorts this is bad. This indicates you need to tune the sort area
parameters in the initialization files. The major sort area parameter is the SORT_AREA_SIZE
parameter.
9. When should you increase copy latches? What parameters control copy latches
When you get excessive contention for the copy latches as shown by the "redo copy" latch hit
ratio. You can increase copy latches via the initialization parameter
LOG_SIMULTANEOUS_COPIES to twice the number of CPUs on your system.
10. Where can you get a list of all initialization parameters for your instance? How about an
indication if they are default settings or have been changed
You can look in the init.ora file for an indication of manually set parameters. For all parameters,
their value and whether or not the current value is the default value, look in the v$parameter
view.
11. Describe hit ratio as it pertains to the database buffers. What is the difference between
instantaneous and cumulative hit ratio and which should be used for tuning
The hit ratio is a measure of how many times the database was able to read a value from the
buffers versus how many times it had to re-read a data value from the disks. A value greater
than 80-90% is good; less could indicate problems. If you simply take the ratio of the existing
statistics, this will be a cumulative value since the database started. If you do a comparison
between pairs of readings based on some arbitrary time span, this is the instantaneous ratio for
that time span. Generally speaking, an instantaneous reading gives more valuable data since it
will tell you what your instance is doing for the period over which it was generated.
12. Discuss row chaining, how does it happen? How can you reduce it? How do you correct it
Row chaining occurs when a VARCHAR2 value is updated and the length of the new value is
longer than the old value and won't fit in the remaining block space. This results in the row
chaining to another block (strictly speaking, this case is row migration). It can be reduced by
setting the storage parameters on the table (such as PCTFREE) to appropriate values. It can be
corrected by export and import of the affected table.
Oracle Interview Questions
1. Give one method for transferring a table from one schema to another:
There are several possible methods, export-import, CREATE TABLE... AS SELECT, or COPY.
2. What is the purpose of the IMPORT option IGNORE? What is its default setting?
The IMPORT IGNORE option tells import to ignore "already exists" errors. If it is not specified
the tables that already exist will be skipped. If it is specified, the error is ignored and the tables
data will be inserted. The default value is N.
3. You have a rollback segment in a version 7.2 database that has expanded beyond optimal,
how can it be restored to optimal
Use the ALTER ROLLBACK SEGMENT ... SHRINK command.
4. If the DEFAULT and TEMPORARY tablespace clauses are left out of a CREATE USER command
what happens? Is this bad or good? Why
The user is assigned the SYSTEM tablespace as a default and temporary tablespace. This is bad
because it causes user objects and temporary segments to be placed into the SYSTEM
tablespace resulting in fragmentation and improper table placement (only data dictionary
objects and the system rollback segment should be in SYSTEM).
5. What are some of the Oracle provided packages that DBAs should be aware of
Oracle provides a number of packages in the form of the DBMS_ packages owned by the SYS
user. The packages used by DBAs may include: DBMS_SHARED_POOL, DBMS_UTILITY,
DBMS_SQL, DBMS_DDL, DBMS_SESSION, DBMS_OUTPUT and DBMS_SNAPSHOT. They may also
try to answer with the UTL*.SQL or CAT*.SQL series of SQL procedures. These can be viewed as
extra credit but aren't part of the answer.
6. What happens if the constraint name is left out of a constraint clause
The Oracle system will use the default name of SYS_Cxxxx where xxxx is a system generated
number. This is bad since it makes tracking which table the constraint belongs to or what the
constraint does harder.
7. What happens if a tablespace clause is left off of a primary key constraint clause
This results in the index that is automatically generated being placed in the user's default
tablespace. Since this will usually be the same tablespace the table is being created in, this
can cause serious performance problems.
8. What is the proper method for disabling and re-enabling a primary key constraint
You use the ALTER TABLE command for both. However, for the enable clause you must specify
the USING INDEX and TABLESPACE clause for primary keys.
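As a sketch (the table, constraint and tablespace names are illustrative):

```sql
ALTER TABLE emp DISABLE CONSTRAINT emp_pk;
-- ... maintenance work ...
ALTER TABLE emp ENABLE CONSTRAINT emp_pk
  USING INDEX TABLESPACE indx STORAGE (INITIAL 1M NEXT 1M);
```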
9. What happens if a primary key constraint is disabled and then enabled without fully
specifying the index clause
The index is created in the user's default tablespace and all sizing information is lost. Oracle
doesn't store this information as a part of the constraint definition, but only as part of the
index definition; when the constraint was disabled the index was dropped and the information
is gone.
10. (On UNIX) When should more than one DB writer process be used? How many should be
used
If the UNIX system being used is capable of asynchronous I/O then only one is required; if the
system is not capable of asynchronous I/O then up to twice the number of disks used by Oracle
DB writer processes should be specified, by use of the db_writers initialization parameter.
11. You are using hot backup without being in archivelog mode, can you recover in the event of
a failure? Why or why not
You can't use hot backup without being in archivelog mode. So no, you couldn't recover.
12. What causes the "snapshot too old" error? How can this be prevented or mitigated
This is caused by large or long running transactions that have either wrapped onto their own
rollback space or have had another transaction write on part of their rollback space. This can be
prevented or mitigated by breaking the transaction into a set of smaller transactions or
increasing the size of the rollback segments and their extents.
13. How can you tell if a database object is invalid?
By checking the STATUS column of the DBA_, ALL_ or USER_OBJECTS views, depending upon
whether you own the object, only have permission on it, or are using a DBA account.
13. A user is getting an ORA-00942 error yet you know you have granted them permission on
the table, what else should you check
You need to check that the user has specified the full name of the object (select empid from
scott.emp; instead of select empid from emp;) or has a synonym that points to the object (create
synonym emp for scott.emp;).
14. A developer is trying to create a view and the database won't let him. He has the
"DEVELOPER" role which has the "CREATE VIEW" system privilege and SELECT grants on the
tables he is using, what is the problem
You need to verify the developer has direct grants on all tables used in the view. You can't
create a stored object with grants given through roles.
15. If you have an example table, what is the best way to get sizing data for the production
table implementation
The best way is to analyze the table and then use the data provided in the DBA_TABLES view to
get the average row length and other pertinent data for the calculation. The quick and dirty way
is to look at the number of blocks the table is actually using and ratio the number of rows in the
table to its number of blocks against the number of expected rows.
16. How can you find out how many users are currently logged into the database? How can you
find their operating system id
There are several ways. One is to look at the v$session or v$process views. Another way is to
check the current_logins parameter in the v$sysstat view. Another if you are on UNIX is to do a
"ps -ef | grep oracle | wc -l" command, but this only works against a single instance installation.
17. A user selects from a sequence and gets back two values, his select is: SELECT
pk_seq.nextval FROM dual;
What is the problem?
Somehow two rows have been inserted into the dual table. This table should be a single row,
single column table that only has one value in it.
18. How can you determine if an index needs to be dropped and rebuilt
Run the ANALYZE INDEX ... VALIDATE STRUCTURE command on the index and then calculate the
ratio LF_BLK_LEN / (LF_BLK_LEN + BR_BLK_LEN); if it isn't near 1.0 (i.e. greater than 0.7 or
so) then the index should be rebuilt. Alternatively, rebuild if the ratio
BR_BLK_LEN / (LF_BLK_LEN + BR_BLK_LEN) is nearing 0.3.
Oracle Interview Questions
1. How can variables be passed to a SQL routine
By use of the & symbol. For passing in variables the numbers 1-8 can be used (&1, &2,...,&8) to
pass the values after the command into the SQLPLUS session. To be prompted for a specific
variable, place the ampersanded variable in the code itself: "select * from dba_tables where
owner=&owner_name;" . Use of double ampersands tells SQLPLUS to resubstitute the value for
each subsequent use of the variable, a single ampersand will cause a reprompt for the value
unless an ACCEPT statement is used to get the value from the user.
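For example, in SQL*Plus (the names are illustrative):

```sql
-- Positional: invoked as   @list_tables.sql SCOTT
SELECT table_name FROM dba_tables WHERE owner = UPPER('&1');

-- Double ampersand: prompts once, then reuses the value without re-prompting
SELECT table_name FROM dba_tables WHERE owner = UPPER('&&owner_name');
SELECT index_name FROM dba_indexes WHERE owner = UPPER('&&owner_name');
```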
2. You want to include a carriage return/linefeed in your output from a SQL script, how can you
do this
The best method is to use the CHR() function (CHR(10) is a linefeed) and the
concatenation operator "||". Another method, although it is hard to document and isn't always
portable, is to use the return/linefeed as a part of a quoted string.
3. How can you call a PL/SQL procedure from SQL
By use of the EXECUTE (short form EXEC) command.
4. How do you execute a host operating system command from within SQL
By use of the exclamation point "!" (in UNIX and some other operating systems) or the HOST (HO) command.
5. You want to use SQL to build SQL, what is this called and give an example
This is called dynamic SQL. An example:
set lines 90 pages 0 termout off feedback off verify off
spool drop_all.sql
select 'drop user '||username||' cascade;' from dba_users where username not in ('SYS','SYSTEM');
spool off
Essentially you are looking to see that they know to include a command (in this case DROP
USER...CASCADE;) and that you need to concatenate, using "||", the values selected from the
database.
6. What SQLPlus command is used to format output from a select
This is best done with the COLUMN command.
7. You want to group the following set of select returns, what can you group on?
MAX(sum_of_cost), MIN(sum_of_cost), COUNT(item_no), item_no
The only column that can be grouped on is the "item_no" column; the rest have aggregate
functions associated with them.
8. What special Oracle feature allows you to specify how the cost based system treats a SQL
statement
The cost-based optimizer allows the use of HINTs to control the optimizer path selection. If they
can give some example hints such as FIRST_ROWS, ALL_ROWS, INDEX, STAR, even better.
9. You want to determine the location of identical rows in a table before attempting to place a
unique index on the table, how can this be done
Oracle tables always have one guaranteed unique column, the rowid column. If you use a
min/max function against your rowid and then select against the proposed primary key you can
squeeze out the rowids of the duplicate rows pretty quick. For example: select rowid from emp
e where e.rowid > (select min(x.rowid) from emp x where x.emp_no = e.emp_no); In the
situation where multiple columns make up the proposed key, they must all be used in the
where clause.
10. What is a Cartesian product
A Cartesian product is the result of an unrestricted join of two or more tables. The result set of
a three table Cartesian product will have x * y * z number of rows where x, y, z correspond to
the number of rows in each table involved in the join.
11. You are joining a local and a remote table; the network manager complains about the traffic
involved. How can you reduce the network traffic?
Push the processing of the remote data to the remote instance by using a view to pre-select the
information for the join. This will result in only the data required for the join being sent across.
11. What is the default ordering of an ORDER BY clause in a SELECT statement
Ascending
12. What is tkprof and how is it used
The tkprof tool is a tuning tool used to determine cpu and execution times for SQL statements.
You use it by first setting timed_statistics to true in the initialization file and then turning on
tracing for either the entire database via the sql_trace parameter or for the session using the
ALTER SESSION command. Once the trace file is generated you run the tkprof tool against the
trace file and then look at the output from the tkprof tool. This can also be used to generate
explain plan output.
13. What is explain plan and how is it used
The EXPLAIN PLAN command is a tool to tune SQL statements. To use it you must have an
explain_table generated in the user you are running the explain plan for. This is created using
the utlxplan.sql script. Once the explain plan table exists you run the explain plan command
giving as its argument the SQL statement to be explained. The explain_plan table is then
queried to see the execution plan of the statement. Explain plans can also be run using tkprof.
14. How do you set the number of lines on a page of output? The width
The SET command in SQLPLUS is used to control the number of lines generated per page and
the width of those lines, for example SET PAGESIZE 60 LINESIZE 80 will generate reports that are
60 lines long with a line width of 80 characters. The PAGESIZE and LINESIZE options can be
shortened to PAGES and LINES.
15. How do you prevent output from coming to the screen
The SET option TERMOUT controls output to the screen. Setting TERMOUT OFF turns off screen
output. This option can be shortened to TERM.
16. How do you prevent Oracle from giving you informational messages during and after a SQL
statement execution
The SET options FEEDBACK and VERIFY can be set to OFF.
17. How do you generate file output from SQL
By use of the SPOOL command.
Oracle Interview Questions
1. What is a CO-RELATED SUBQUERY
A CO-RELATED SUBQUERY is one that has a correlation name as table or view designator in the
FROM clause of the outer query and the same correlation name as a qualifier of a search
condition in the WHERE clause of the subquery. eg
SELECT field1 from table1 X
WHERE field2>(select avg(field2) from table1 Y
where
field1=X.field1);
(The subquery in a correlated subquery is re-evaluated for every row of the table or view named
in the outer query.)
2. What are various joins used while writing SUBQUERIES
Self join - a join in which a foreign key of a table references the same table.
Outer join - a join condition with which one can query all the rows of one of the tables in
the join even though they don't satisfy the join condition.
Equi-join - a join condition that retrieves rows from one or more tables in which one or more
columns in one table are equal to one or more columns in the second table.
3. What are various constraints used in SQL
NOT NULL, UNIQUE, PRIMARY KEY, FOREIGN KEY (referential) and CHECK.
4. What are different Oracle database objects
TABLES VIEWS INDEXES SYNONYMS SEQUENCES TABLESPACES etc
5. What is difference between Rename and Alias
RENAME gives a permanent new name to a table or column, whereas an alias is a temporary name
given to a table or column that does not persist once the SQL statement is executed.
6. What is a view
A view is a stored query based on one or more tables; it is a virtual table.
7. What are various privileges that a user can grant to another user
SELECT CONNECT RESOURCE
8. What is difference between UNIQUE and PRIMARY KEY constraints
A table can have only one PRIMARY KEY whereas there can be any number of UNIQUE keys. The
columns that compose a primary key are automatically defined NOT NULL, whereas the columns
of a UNIQUE constraint are not automatically mandatory; you must also specify NOT NULL on
those columns if required.
9. Can a primary key contain more than one column?
Yes
10. How you will avoid duplicating records in a query
By using DISTINCT
11. What is difference between SQL and SQL*PLUS
SQL*Plus is a command line tool, whereas SQL is the language used to query the relational
database (DML, DCL, DDL). SQL*Plus allows the user to type SQL and PL/SQL commands to be
executed directly against an Oracle database, and its own commands are used to format query
results, set options, and edit SQL commands and PL/SQL.
12. Which datatype is used for storing graphics and images
The LONG RAW data type is used for storing BLOBs (binary large objects).
13. How will you delete duplicating rows from a base table
DELETE FROM table_name A WHERE rowid > (SELECT min(rowid) FROM table_name B WHERE
B.table_no = A.table_no);
or:
CREATE TABLE new_table AS SELECT DISTINCT * FROM old_table;
DROP TABLE old_table;
RENAME new_table TO old_table;
or:
DELETE FROM table_name A WHERE rowid NOT IN (SELECT MAX(ROWID) FROM table_name
GROUP BY column_name);
14. What is difference between SUBSTR and INSTR
SUBSTR returns a specified portion of a string, e.g. SUBSTR('BCDEF',2,3) outputs CDE. INSTR
provides the character position at which a pattern is found in a string,
e.g. INSTR('ABC-DC-F','-',1,2) outputs 7 (2nd occurrence of '-').
15. There is a string '120000 12 0 .125' ,how you will find the position of the decimal place
INSTR('120000 12 0 .125','.',1) output 13
16. There is a '%' sign in one field of a column. What will be the query to find it.
Use an escape character before the '%', e.g. WHERE column_name LIKE '%\%%' ESCAPE '\';
17. When do you use a WHERE clause and when do you use a HAVING clause?
The HAVING clause is used when you want to specify a condition on a group function and is
written after the GROUP BY clause. The WHERE clause is used when you want to specify a
condition on columns or single-row functions (not group functions) and is written before the
GROUP BY clause if one is used.
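For instance (the EMP table and threshold are illustrative):

```sql
SELECT deptno, SUM(sal) AS total_sal
  FROM emp
 WHERE job <> 'CLERK'       -- WHERE: filters rows before grouping
 GROUP BY deptno
HAVING SUM(sal) > 10000;    -- HAVING: filters groups after grouping
```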
18. Which is faster - IN or EXISTS?
EXISTS is generally faster than IN because EXISTS only checks for the existence of a row,
whereas IN materializes and compares the subquery's values.
A more complete answer: if the result of the subquery is small, then IN is typically more
appropriate; if the result of the subquery is large, then EXISTS is more appropriate.
19. What is a OUTER JOIN
Outer join - a join condition with which you can query all the rows of one of the tables in the
join even though they don't satisfy the join condition.
20. How will you prevent your query from using indexes?
SELECT * FROM emp WHERE emp_no || '' = '12345';
i.e. concatenate a character column with an empty string (or add 0 to a numeric column) in the
WHERE condition so the index on it cannot be used.
SELECT /*+ FULL(a) */ ename, emp_no FROM emp a WHERE emp_no = 1234;
i.e. using HINTS
Oracle Interview Questions
1. What is a pseudo column. Give some examples
It is a column that is not an actual column in the table,
e.g. USER, UID, SYSDATE, ROWNUM, ROWID, NULL and LEVEL.
Suppose a customer table is there having different columns like customer no, payments. What will
be the query to select the top three max payments?
For top N queries, see the
http://www.orafaq.com/forum/mv/msg/160920/472554/102589/#msg_472554 post.
2. What is the purpose of a cluster.
Oracle does not allow a user to specifically locate tables, since that is a part of the function of
the RDBMS. However, for the purpose of increasing performance, oracle allows a developer to
create a CLUSTER. A CLUSTER provides a means for storing data from different tables together
for faster retrieval than if the table placement were left to the RDBMS.
3. What is a cursor.
Oracle uses a work area to execute SQL statements and store processing information. A PL/SQL
construct called a cursor lets you name a work area and access its stored information. A cursor is
a mechanism used to fetch more than one row in a PL/SQL block.
4. Difference between an implicit & an explicit cursor.
PL/SQL declares a cursor implicitly for all SQL data manipulation statements, including queries that return only one row. However, for queries that return more than one row you must declare an explicit cursor or use a cursor FOR loop.
An explicit cursor is a cursor in which the cursor name is explicitly assigned to a SELECT statement via the CURSOR...IS statement and is processed with Declare, Open, Fetch and Close. Explicit cursors are used to process multirow SELECT statements; an implicit cursor is used to process INSERT, UPDATE, DELETE and single-row SELECT...INTO statements.
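A minimal explicit-cursor sketch (emp table assumed) showing the full DECLARE/OPEN/FETCH/CLOSE cycle:

```sql
DECLARE
  CURSOR c_emp IS
    SELECT ename FROM emp;
  v_ename emp.ename%TYPE;
BEGIN
  OPEN c_emp;
  LOOP
    FETCH c_emp INTO v_ename;       -- one row per fetch
    EXIT WHEN c_emp%NOTFOUND;       -- attribute set after each fetch
    DBMS_OUTPUT.PUT_LINE(v_ename);
  END LOOP;
  CLOSE c_emp;
END;
/
```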
5. What are cursor attributes
%ROWCOUNT %NOTFOUND %FOUND %ISOPEN
6. What is a cursor for loop.
A cursor FOR loop is a loop where Oracle implicitly declares a loop variable, the loop index, of the same record type as the cursor's record.
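The same processing as an explicit OPEN/FETCH/CLOSE, but the FOR loop declares the record and opens, fetches and closes implicitly (emp table assumed):

```sql
BEGIN
  FOR r IN (SELECT empno, ename FROM emp) LOOP
    -- r is implicitly declared with the cursor's record type
    DBMS_OUTPUT.PUT_LINE(r.empno || ' ' || r.ename);
  END LOOP;   -- cursor is closed automatically on exit
END;
/
```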
7. Difference between NO DATA FOUND and %NOTFOUND
NO_DATA_FOUND is an exception raised only for SELECT...INTO statements when the WHERE clause of the query does not match any rows. When the WHERE clause of an explicit cursor does not match any rows the %NOTFOUND attribute is set to TRUE instead.
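A sketch contrasting the two behaviors (emp table assumed): the explicit cursor merely sets the attribute, while SELECT...INTO raises the exception.

```sql
DECLARE
  v_ename emp.ename%TYPE;
  CURSOR c_emp IS SELECT ename FROM emp WHERE empno = -1;
BEGIN
  OPEN c_emp;
  FETCH c_emp INTO v_ename;
  IF c_emp%NOTFOUND THEN            -- no exception, just TRUE
    DBMS_OUTPUT.PUT_LINE('cursor found no row');
  END IF;
  CLOSE c_emp;

  SELECT ename INTO v_ename FROM emp WHERE empno = -1;
EXCEPTION
  WHEN NO_DATA_FOUND THEN           -- raised only by SELECT...INTO
    DBMS_OUTPUT.PUT_LINE('NO_DATA_FOUND raised');
END;
/
```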
8. What a SELECT FOR UPDATE cursor represent.
SELECT...FROM...FOR UPDATE [OF column-reference] [NOWAIT]. The processing done in a fetch loop modifies the rows that have been retrieved by the cursor. A convenient way of modifying the rows is a method with two parts: the FOR UPDATE clause in the cursor declaration, and the WHERE CURRENT OF clause in an UPDATE or DELETE statement.
9. What 'WHERE CURRENT OF ' clause does in a cursor.
WHERE CURRENT OF refers to the row most recently fetched by the named cursor, so the UPDATE or DELETE acts on that row directly. The original fragment references a cursor X that is never declared; a corrected version (classes and students tables assumed) is:
DECLARE
  CURSOR c_students IS
    SELECT current_credits FROM students FOR UPDATE OF current_credits;
  v_numcredits classes.num_credits%TYPE;
BEGIN
  SELECT num_credits INTO v_numcredits FROM classes
  WHERE dept = 123 AND course = 101;
  FOR r IN c_students LOOP
    UPDATE students
    SET current_credits = current_credits + v_numcredits
    WHERE CURRENT OF c_students;
  END LOOP;
  COMMIT;
END;
10. What is use of a cursor variable? How it is defined.
A cursor variable can be associated with different queries at run time, and so can hold different result sets at run time; a static cursor can only be associated with one query. A cursor variable is a reference type (like a pointer in C). Declaring a cursor variable: TYPE type_name IS REF CURSOR RETURN return_type; where type_name is the name of the reference type and return_type is a record type indicating the types of the select list that will eventually be returned by the cursor variable.
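A sketch of declaring, opening and closing a cursor variable (strongly typed against a hypothetical students table):

```sql
DECLARE
  TYPE t_studentsref IS REF CURSOR RETURN students%ROWTYPE;
  c_students t_studentsref;
  r_student  students%ROWTYPE;
BEGIN
  OPEN c_students FOR SELECT * FROM students;  -- query bound at run time
  FETCH c_students INTO r_student;
  CLOSE c_students;                            -- free the query's resources
END;
/
```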
11. What should be the return type for a cursor variable.Can we use a scalar data type as return
type.
The return type for a cursor variable must be a record type. It can be declared explicitly as a user-defined record, or %ROWTYPE can be used. e.g. TYPE t_studentsref IS REF CURSOR RETURN students%ROWTYPE;
No, a scalar data type cannot be used as the return type.
12. How you open and close a cursor variable.Why it is required.
OPEN cursor_variable FOR SELECT...statement; CLOSE cursor_variable; The OPEN...FOR syntax is used to associate a cursor variable with a particular SELECT statement. The CLOSE statement is used to free the resources used for the query.
13. How you were passing cursor variables in PL/SQL 2.2.
In PL/SQL 2.2 cursor variables cannot be declared in a package. This is because the storage for a cursor variable has to be allocated using Pro*C or OCI; with version 2.2, the only means of passing a cursor variable to a PL/SQL block is via a bind variable or a procedure parameter.
14. Can cursor variables be stored in PL/SQL tables.If yes how.If not why.
No, a cursor variable points to a row, which cannot be stored in a two-dimensional PL/SQL table.
15. Difference between procedure and function.
Functions are named PL/SQL blocks that return a value and can be called with arguments; a procedure is a named block that can be called with parameters. A procedure call is a PL/SQL statement by itself, while a function call is made as part of an expression.
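A minimal sketch of the distinction (the names f_double and p_show are hypothetical): the function call sits inside an expression, while the procedure call is a statement of its own.

```sql
CREATE OR REPLACE FUNCTION f_double(p_n IN NUMBER) RETURN NUMBER IS
BEGIN
  RETURN p_n * 2;                 -- a function must return a value
END;
/

CREATE OR REPLACE PROCEDURE p_show(p_n IN NUMBER) IS
BEGIN
  DBMS_OUTPUT.PUT_LINE(p_n);      -- a procedure performs an action
END;
/

-- Function used inside an expression; procedure called as a statement
SELECT f_double(21) FROM dual;
EXEC p_show(42);
```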
16. What are different modes of parameters used in functions and procedures.
IN, OUT, IN OUT
17. What is difference between a formal and an actual parameter
The variables that are passed as arguments in a call are the actual parameters; the parameters named in the procedure declaration are the formal parameters. Actual parameters contain the values that are passed to a procedure and receive results. Formal parameters are the placeholders for the values of actual parameters.
18. Can the default values be assigned to actual parameters.
Yes
19. Can a function take OUT parameters.If not why.
Yes. A function returns a value, but can also have one or more OUT parameters. It is best practice, however, to use a procedure rather than a function if you have multiple values to return.
20. What is syntax for dropping a procedure and a function .Are these operations possible.
DROP PROCEDURE procedure_name;
DROP FUNCTION function_name;
Yes, these operations are possible.
21. What are ORACLE PRECOMPILERS.
Using Oracle precompilers, SQL statements and PL/SQL blocks can be contained inside 3GL programs written in C, C++, COBOL, Pascal, FORTRAN, PL/I and Ada. The precompilers are known as Pro*C, Pro*COBOL, etc. This form of PL/SQL is known as embedded PL/SQL; the language in which PL/SQL is embedded is known as the host language. The precompiler translates the embedded SQL and PL/SQL statements into calls to the precompiler runtime library. The output must be compiled and linked with this library to create an executable.
22. What is OCI. What are its uses.
Oracle Call Interface is a method of accessing the database from a 3GL program. Uses--no precompiler is required, and PL/SQL blocks are executed like other DML statements.
The OCI library provides
-functions to parse SQL statements
-bind input variables
-bind output variables
-execute statements
-fetch the results
23. Difference between database triggers and form triggers.
a) A database trigger (DBT) fires when a DML operation is performed on a database table; a form trigger (FT) fires when the user presses a key or navigates between fields on the screen. b) Database triggers can be row level or statement level; form triggers make no distinction between row level and statement level. c) Database triggers can manipulate data stored in Oracle tables via SQL; form triggers can manipulate data in Oracle tables as well as variables in forms. d) Database triggers can be fired from any session executing the triggering DML statement; form triggers can be fired only from the form that defines the trigger. e) Database triggers can cause other database triggers to fire; form triggers can cause other database triggers to fire, but not other form triggers.
24. What is UTL_FILE? What are the different procedures and functions associated
with it. UTL_FILE is a package that adds the ability to read and write operating system files.
Procedures associated with it are FCLOSE, FCLOSE_ALL and five procedures to output data to a file:
PUT, PUT_LINE, NEW_LINE, PUTF and FFLUSH. Functions
associated with it are FOPEN and IS_OPEN.
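A write sketch using those routines (the directory '/tmp' and file name are assumptions; in this era of Oracle the directory must be listed in the utl_file_dir initialization parameter):

```sql
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('/tmp', 'demo.txt', 'w');  -- open for writing
  UTL_FILE.PUT_LINE(f, 'hello from UTL_FILE');
  UTL_FILE.FFLUSH(f);                            -- force buffered output to disk
  UTL_FILE.FCLOSE(f);
END;
/
```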
25. Can you use a commit statement within a database trigger.
No
26. What is the maximum buffer size that can be specified using the DBMS_OUTPUT.ENABLE
function?
1,000,000
Oracle Interview Questions
1. When looking at the estat events report you see that you are getting busy buffer waits. Is this
bad? How can you find what is causing it
Buffer busy waits could indicate contention in redo, rollback or data blocks. You need to check
the v$waitstat view to see what areas are causing the problem. The value of the "count"
column tells where the problem is, the "class" column tells you with what. UNDO is rollback
segments, DATA is database buffers.
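The check described above can be sketched as a simple query against v$waitstat:

```sql
-- Which block class is behind the buffer busy waits?
SELECT class, count, time
FROM   v$waitstat
ORDER  BY count DESC;
```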
2. If you see contention for library caches how can you fix it
Increase the size of the shared pool.
3. If you see statistics that deal with "undo" what are they really talking about
Rollback segments and associated structures.
4. If a tablespace has a default pctincrease of zero what will this cause (in relationship to the smon process)
The SMON process won't automatically coalesce its free space fragments.
5. If a tablespace shows excessive fragmentation what are some methods to defragment the
tablespace? (7.1,7.2 and 7.3 only)
In Oracle 7.0 to 7.2 the use of the 'alter session set events ''immediate trace name coalesce level ts#''' command is the easiest way to defragment contiguous free space fragmentation. The ts# parameter corresponds to the ts# value found in the SYS.ts$ table. In version 7.3 the 'alter tablespace ... coalesce;' command is best. If the free space isn't contiguous then export, drop and import of the tablespace contents may be the only way to reclaim non-contiguous free space.
6. How can you tell if a tablespace has excessive fragmentation
If a select against the dba_free_space view shows that the count of a tablespace's extents is greater than the count of its data files, then it is fragmented.
7. You see the following on a status report: redo log space requests 23, redo log space wait time 0. Is this something to worry about? What if redo log space wait time is high? How can you fix this?
Since the wait time is zero, no. If the wait time was high it might indicate a need for more or larger redo logs.
8. What can cause a high value for recursive calls? How can this be fixed
A high value for recursive calls is caused by improper cursor usage, excessive dynamic space management actions, and/or excessive statement re-parses. You need to determine the cause and correct it by either relinking applications to hold cursors, using proper space management techniques (proper storage and sizing), or ensuring repeat queries are placed in packages for proper reuse.
9. If you see a pin hit ratio of less than 0.8 in the estat library cache report is this a problem? If
so, how do you fix it
This indicates that the shared pool may be too small. Increase the shared pool size.
10. If you see the value for reloads is high in the estat library cache report is this a matter for
concern
Yes, you should strive for zero reloads if possible. If you see excessive reloads then increase the
size of the shared pool.
11. You look at the dba_rollback_segs view and see that there is a large number of shrinks and
they are of relatively small size, is this a problem? How can it be fixed if it is a problem
A large number of small shrinks indicates a need to increase the size of the rollback segment
extents. Ideally you should have no shrinks or a small number of large shrinks. To fix this just
increase the size of the extents and adjust optimal accordingly.
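A sketch of checking and fixing this (note the shrink statistics actually live in v$rollstat, joined to v$rollname for the segment name; the segment name and size below are illustrative):

```sql
-- Shrink/wrap statistics per rollback segment
SELECT n.name, s.shrinks, s.wraps, s.aveshrink
FROM   v$rollname n, v$rollstat s
WHERE  n.usn = s.usn;

-- Enlarge the extents / optimal setting (size is illustrative)
ALTER ROLLBACK SEGMENT r01 STORAGE (OPTIMAL 8M);
```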
12. You look at the dba_rollback_segs view and see that you have a large number of wraps is
this a problem
A large number of wraps indicates that the extent size for your rollback segments is probably too small. Increase the size of your extents to reduce the number of wraps. You can look at the average transaction size in the same view to get information on transaction size.
Oracle Interview Questions
1. You have just started a new instance with a large SGA on a busy existing server. Performance
is terrible, what should you check for
The first thing to check with a large SGA is that it isn't being swapped out.
2. What OS user should be used for the first part of an Oracle installation (on UNIX)
You must use root first.
3. When should the default values for Oracle initialization parameters be used as is
Never
4. How many control files should you have? Where should they be located
At least 2 on separate disk spindles. Be sure they say on separate disks, not just file systems.
5. How many redo logs should you have and how should they be configured for maximum
recoverability
You should have at least three groups of two redo logs with the two logs each on a separate
disk spindle (mirrored by Oracle). The redo logs should not be on raw devices on UNIX if it can
be avoided.
6. You have a simple application with no "hot" tables (i.e. uniform IO and access requirements).
How many disks should you have assuming standard layout for SYSTEM, USER, TEMP and
ROLLBACK tablespaces
At least 7, see disk configuration answer above.
7. Describe third normal form
Something like: In third normal form all attributes in an entity are related to the primary key
and only to the primary key
8. Is the following statement true or false:
"All relational databases must be in third normal form" False. While 3NF is good for logical
design most databases, if they have more than just a few tables, will not perform well using full
3NF. Usually some entities will be denormalized in the logical to physical transfer process.
9. What is an ERD
An ERD is an Entity-Relationship-Diagram. It is used to show the entities and relationships for a
database logical model.
10. Why are recursive relationships bad? How do you resolve them
A recursive relationship (one where a table relates to itself) is bad when it is a hard relationship (i.e. neither side is a "may", both are "must") as this can result in it not being possible to put in a top or perhaps a bottom of the table (for example in the EMPLOYEE table you couldn't put in the PRESIDENT of the company because he has no boss, or the junior janitor because he has no subordinates). These types of relationships are usually resolved by adding a small intersection entity.
11. What does a hard one-to-one relationship mean (one where the relationship on both ends is
"must")
Expected answer: This means the two entities should probably be made into one entity.
12. How should a many-to-many relationship be handled
By adding an intersection entity table
13. What is an artificial (derived) primary key? When should an artificial (or derived) primary
key be used
A derived key comes from a sequence. Usually it is used when a concatenated key becomes too
cumbersome to use as a foreign key.
Oracle Interview Questions
1. When should you consider denormalization
Whenever performance analysis indicates it would be beneficial to do so without compromising
data integrity.
2. How can you determine if an Oracle instance is up from the operating system level
There are several base Oracle processes that will be running on multi-user operating systems; these will be smon, pmon, dbwr and lgwr. Any answer that uses the operating system's process-listing feature to check for these is acceptable. For example, on UNIX "ps -ef | grep dbwr" will show what instances are up.
3. Users from the PC clients are getting messages indicating: ORA-06114: (Cnct err, can't get err txt. See Servr Msgs & Codes Manual)
What could the problem be?
The instance name is probably incorrect in their connection string.
4. Users from the PC clients are getting the following error stack: ERROR: ORA-01034: ORACLE
not available ORA-07318: smsget: open error when opening sgadef.dbf file. HP-UX Error: 2: No
such file or directory
What is the probable cause?
The Oracle instance they are trying to access is shut down; restart the instance.
5. How can you determine if the SQLNET process is running for SQLNET V1? How about V2
For SQLNET V1 check for the existence of the orasrv process. You can use the command "tcpctl
status" to get a full status of the V1 TCPIP server, other protocols have similar command
formats. For SQLNET V2 check for the presence of the LISTENER process(s) or you can issue the
command "lsnrctl status".
6. What file will give you Oracle instance status information? Where is it located
The alert log (alert_<SID>.log). It is located in the directory specified by the background_dump_dest parameter in the v$parameter table.
7. Users aren't being allowed on the system. The following message is received: ORA-00257: archiver is stuck. Connect internal only, until freed.
What is the problem?
The archive destination is probably full; back up the archive logs and remove them, and the archiver will restart.
8. Where would you look to find out if a redo log was corrupted assuming you are using Oracle
mirrored redo logs
There is no message that comes to the SQLDBA or SRVMGR programs during startup in this
situation, you must check the alert.log file for this information.
9. You attempt to add a datafile and get: ORA-01118: cannot add any more datafiles: limit of 40 exceeded.
What is the problem and how can you fix it?
When the database was created the db_files parameter in the initialization file was set to 40. You can shut down and reset this to a higher value, up to the value of MAXDATAFILES as specified at database creation. If MAXDATAFILES is set too low, you will have to rebuild the control file to increase it before proceeding.
10. You look at your fragmentation report and see that smon hasn't coalesced any of your tablespaces, even though you know several have large chunks of contiguous free extents. What is the problem?
Check the dba_tablespaces view for the value of pct_increase for the tablespaces. If pct_increase is zero, smon will not coalesce their free space.
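The check and the manual workaround can be sketched as (the tablespace name users is illustrative):

```sql
-- SMON only coalesces tablespaces whose default pctincrease is non-zero
SELECT tablespace_name, pct_increase
FROM   dba_tablespaces;

-- Coalesce manually regardless of pctincrease
ALTER TABLESPACE users COALESCE;
```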
11. Your users get the following error: ORA-00055: maximum number of DML locks exceeded.
What is the problem and how do you fix it?
The number of DML locks is set by the initialization parameter DML_LOCKS. If this value is set too low (which it is by default) you will get this error. Increase the value of DML_LOCKS. If you are sure that this is just a temporary problem, you can have them wait and then try again later and the error should clear.
12. You get a call from you backup DBA while you are on vacation. He has corrupted all of the
control files while playing with the ALTER DATABASE BACKUP CONTROLFILE command. What do
you do
As long as all datafiles are safe and he was successful with the BACKUP CONTROLFILE command you can do the following:
CONNECT INTERNAL
STARTUP MOUNT
(Take any read-only tablespaces offline before the next step: ALTER DATABASE DATAFILE .... OFFLINE;)
RECOVER DATABASE USING BACKUP CONTROLFILE
ALTER DATABASE OPEN RESETLOGS;
(Bring read-only tablespaces back online.)
Shut down and back up the system, then restart.
If they have a recent output file from the ALTER DATABASE BACKUP CONTROLFILE TO TRACE; command, they can use that to recover as well. If no backup of the control file is available then the following will be required:
CONNECT INTERNAL
STARTUP NOMOUNT
CREATE CONTROLFILE .....;
However, they will need to know all of the datafiles, logfiles, and settings for MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY and MAXDATAFILES for the database to use the command.
Oracle Interview Questions
1. How would you determine the time zone under which a database was operating?
2. Explain the use of setting GLOBAL_NAMES equal to TRUE.
3. What command would you use to encrypt a PL/SQL application?
4. Explain the difference between a FUNCTION, PROCEDURE and PACKAGE.
5. Explain the use of table functions.
6. Name three advisory statistics you can collect.
7. Where in the Oracle directory tree structure are audit traces placed?
8. Explain materialized views and how they are used.
9. When a user process fails, what background process cleans up after it?
10. What background process refreshes materialized views?
11. How would you determine what sessions are connected and what resources they are waiting for?
12. Describe what redo logs are.
13. How would you force a log switch?
14. Give two methods you could use to determine what DDL changes have been made.
15. What does coalescing a tablespace do?
16. What is the difference between a TEMPORARY tablespace and a PERMANENT tablespace?
17. Name a tablespace automatically created when you create a database.
18. When creating a user, what permissions must you grant to allow them to connect to the database?
19. How do you add a data file to a tablespace?
20. How do you resize a data file?
21. What view would you use to look at the size of a data file?
22. What view would you use to determine free space in a tablespace?
23. How would you determine who has added a row to a table?
24. How can you rebuild an index?
25. Explain what partitioning is and what its benefit is.
26. You have just compiled a PL/SQL package but got errors, how would you view the errors?
27. How can you gather statistics on a table?
28. How can you enable a trace for a session?
29. What is the difference between the SQL*Loader and IMPORT utilities?
30. Name two files used for network connection to a database.
Oracle Interview Questions
1. In a system with an average of 40 concurrent users you get the following from a query on
rollback extents:
ROLLBACK   CUR EXTENTS
--------   -----------
R01        11
R02        8
R03        12
R04        9
SYSTEM     4

2. You have room for each to grow by 20 more extents each. Is there a problem? Should you take any action?
No there is not a problem. You have 40 extents showing and an average of 40 concurrent users.
Since there is plenty of room to grow no action is needed.
3. You see multiple extents in the temporary tablespace. Is this a problem
As long as they are all the same size this isn't a problem. In fact, it can even improve performance since Oracle won't have to create a new extent when a user needs one.
4. Define OFA.
OFA stands for Optimal Flexible Architecture. It is a method of placing directories and files in an
Oracle system so that you get the maximum flexibility for future tuning and file placement.
5. How do you set up your tablespace on installation
The answer here should show an understanding of separation of redo and rollback, data and indexes, and isolation of SYSTEM tables from other tables. An example would be to specify that at least 7 disks should be used for an Oracle installation so that you can place the SYSTEM tablespace on one, redo logs on two (mirrored redo logs), the TEMPORARY tablespace on another, the ROLLBACK tablespace on another, and still have two for DATA and INDEXES. They should indicate how they will handle archive logs and exports as well. As long as they have a logical plan for combining or further separation, more or fewer disks can be specified.
6. What should be done prior to installing Oracle (for the OS and the disks)
Adjust kernel parameters or OS tuning parameters in accordance with the installation guide. Be sure enough contiguous disk space is available.
7. You have installed Oracle and you are now setting up the actual instance. You have been
waiting an hour for the initialization script to finish, what should you check first to determine if
there is a problem
Check to make sure that the archiver isn't stuck. If archive logging is turned on during install a
large number of logs will be created. This can fill up your archive log destination causing Oracle
to stop to wait for more space.
8. When configuring SQLNET on the server what files must be set up
INITIALIZATION file, TNSNAMES.ORA file, SQLNET.ORA file
9. When configuring SQLNET on the client what files need to be set up
SQLNET.ORA, TNSNAMES.ORA
10. What must be installed with ODBC on the client in order for it to work with Oracle
SQLNET and PROTOCOL (for example: TCPIP adapter) layers of the transport programs.
General Oracle Questions
* What Oracle products have you worked with?
* What version of Oracle were you running?
* Compare Oracle to any other database that you know. Why would you prefer to work on
one and not on the other?
******************************************************************************
******************************************************************************
******
******************************************************************************
******************************************************************************
******
Interview Questions
******************************************************************************
******************************************************************************
******
******************************************************************************
******************************************************************************
******
Oracle Apps DBA Cloning Interview Questions/FAQs
Oracle Apps DBA EBS (E-Business Suite)
Cloning Interview Questions/FAQs
1. What is cloning and why is it required?
• Cloning is the process of creating an identical copy of the Oracle application system.
It is required for the following reasons:
• Creating a test copy of your production system before upgrading.
• Moving an existing system to a different machine.
• To test some patches
• Creating a development copy of your environment to be used by the developers.
2. What is rapid clone?
Ans : Rapid Clone is the new cloning utility introduced in Release 11.5.8. Rapid Clone leverages
the new installation and configuration technology utilized by Rapid Install
3. How do I determine if my system is rapid clone enabled?
Ans : First, verify system is AutoConfig enabled. Then, verify that you have applied the latest
Rapid Clone patch.
4. Explain the cloning process?
Ans :
1. Run adpreclone on the source: perl adpreclone.pl dbTier as the oracle user and perl adpreclone.pl appsTier as the applmgr user.
2. Take a cold/hot backup of the source database.
3. Copy the five directories appl, comn, ora, db, data to the target.
4. Rename the directories and change the permissions.
5. Set the inventory in oraInst.loc.
6. Run perl adcfgclone.pl dbTier as the oracle user if the backup type is cold.
7. If the backup type is hot backup then run perl adcfgclone.pl dbTechStack, create the control file on the target from the control file trace script from the source, recover the database, and alter database open resetlogs.
8. Run autoconfig with the ports changed as per requirement in the XML context file.
9. Run perl adcfgclone.pl appsTier as the applmgr user.
10. Run autoconfig with the ports changed as per requirement in the XML context file.
5. What are the steps to clone from a single node to a multi-node?
• You must login as the owner of file system once the database cloning is done.
• Run the adcfgclone.pl from the common_top/clone bin.
• Answer the prompt asking whether the target system has more than one application tier server node.
• Collect the details for processing node, admin node, forms node, and web node.
• Now you get a prompt for the various mount point details and it creates the context file for
you.
• Follow the same steps from all the nodes.
6. What are the files you need to copy from APPL_TOP for creating a clone application system?
• APPL_TOP
• OA_HTML
• OA_JAVA
• OA_JRE_TOP
• <COMMON_TOP>/util
• <COMMON_TOP>/clone
• 806 ORACLE_HOME
• iAS ORACLE_HOME
7. Does clone preserve the patch history?
• Yes, Rapid clone preserves the patch history in following locations
• RDBMS ORACLE_HOME: preserves the OUI oraInventory.
• iAS ORACLE_HOME: preserves the OUI oraInventory
• 806 ORACLE_HOME: preserves the patch level and Oracle inventory
• APPL_TOP and Database: preserves the patch level and history tables.
8. What are the scripts do you use while Apps cloning?
• adpreclone.pl prepares the source system and adcfgclone.pl configures the target system.
• adpreclone.pl is located in the $COMMON_TOP/admin/scripts/<contextname> directory and adcfgclone.pl in $COMMON_TOP/clone/bin.
• adpreclone.pl collects information about the database.
• It also creates generic templates of files containing source-specific hardcoded values.
9. What are the pre-upgrade steps that need to be taken for the upgrade of a non-11i instance to 11.5.10?
• First, you need to take a complete backup of the application system.
• Run the TUMS utility
• Review the TUMS report
• Maintain the multilingual tables
• Rename the custom database objects
• Check attachment file upload directory
• You need to save the custom.pll
10. How often Do you clone?
Ans: Cloning happens biweekly or monthly depending on the organization requirement.
11. How much time does it take to upgrade, clone?
Ans: A clone usually takes around 48 hours to copy and configure; upgrade time depends on the database size and the modules involved. An upgrade from 11.5.9 to 11.5.10.2 will take around 3-4 days, and an 11i to R12 upgrade will take around 4-5 days.
12. When do we run FND_CONC_CLONE.SETUP_CLEAN?
Ans:
The FND_NODES table contains node information. If you have cloned a test instance from production, the node information of production will still be present in the test instance after the clone. We use FND_CONC_CLONE.SETUP_CLEAN to clean up the FND_NODES table in the target and clear the source node information as part of cloning.
Below syntax to execute:
SQL> sho user
USER is "APPS"
SQL> EXEC FND_CONC_CLONE.SETUP_CLEAN;
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
This will delete all the entries in the FND_NODES table. To populate it with the target system node information, run autoconfig on the DB node and the Applications nodes.
13. What is the location of adpreclone.pl for oracle user?
Ans : $RDBMS_ORACLE_HOME/appsutil/scripts/
14. What is the location of adcfgclone.pl for oracle user?
Ans : $RDBMS_ORACLE_HOME/appsutil/clone/bin
15. What is the location of adpreclone.pl for applmgr user?
Ans : $COMMON_TOP/admin/scripts/
16. What is the location of adcfgclone.pl for applmgr user?
Ans : $COMMON_TOP/clone/bin
17. How do we find whether adpreclone has been run on the source or not?
Ans : Check whether the clone directory exists under $RDBMS_ORACLE_HOME/appsutil for the oracle user and under $COMMON_TOP for the applmgr user.
18. How many clonings have you done?
19. What are the clone errors, you have encountered?
******************************************************************************
******************************************************************************
******
******************************************************************************
******************************************************************************
******
CLONING OF ASM DATABASE STEPS
******************************************************************************
******************************************************************************
******
******************************************************************************
******************************************************************************
******
Manual Oracle 11gR2 ASM database cloning
Posted on September 21, 2010 by justin
This article is about cloning an ASM database to another ASM database without using RMAN. This is purely experimental and not recommended by Oracle. The Oracle version I tried is 11.2.0.1.0 with no patches installed.
I am using the old way of backing up the data files by putting the tablespaces in backup mode. Put each tablespace into backup mode and copy the files to the local file system using the asmcmd cp command.
Take the tablespaces into backup mode
Log in as SYS to the database and issue begin backup for each tablespace.
alter tablespace my_ts begin backup;
exit;
Copy the ASM file to local file system
asmcmd
ASMCMD>cp +DATA/ORCL/DATAFILE/MY_TS.266.727878895
/local/mnt/bk/datafile/MY_TS.dbf
ASMCMD>cp +DATA/ORCL/DATAFILE/SYSAUX.257.727526219
/local/mnt/bk/datafile/SYSAUX.dbf
ASMCMD>cp +DATA/ORCL/DATAFILE/SYSTEM.256.727526219
/local/mnt/bk/datafile/SYSTEM.dbf
ASMCMD>cp +DATA/ORCL/DATAFILE/USERS.259.727526219 /local/mnt/bk/datafile/USERS.dbf
ASMCMD>cp +DATA/ORCL/DATAFILE/UNDOTBS1.258.727526219
/local/mnt/bk/datafile/UNDOTBS1.dbf
scp the datafile to the target machine or remote host
cd /local/mnt/rman/
scp -r -p *.dbf user@targetmachine:/local/mnt/bk/
Generate a control file trace
We need a controlfile trace to create a 'create controlfile' script.
Log in to the source database as SYS:
alter database backup controlfile to trace as '/tmp/cr_control_file.sql';
Edit /tmp/cr_control_file.sql to look similar to the following one.
STARTUP NOMOUNT;
CREATE CONTROLFILE SET DATABASE "TRDB" RESETLOGS NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 (
'+DATA1/trdb/onlinelog/log1a.dbf',
'+FRA1/trdb/onlinelog/log1b.dbf'
) SIZE 50M BLOCKSIZE 512,
GROUP 2 (
'+DATA1/trdb/onlinelog/log2a.dbf',
'+FRA1/trdb/onlinelog/log2b.dbf'
) SIZE 50M BLOCKSIZE 512,
GROUP 3 (
'+DATA1/trdb/onlinelog/log3a.dbf',
'+FRA1/trdb/onlinelog/log3b.dbf'
) SIZE 50M BLOCKSIZE 512
DATAFILE
'+DATA1/trdb/datafile/system.dbf',
'+DATA1/trdb/datafile/sysaux.dbf',
'+DATA1/trdb/datafile/undotbs1.dbf',
'+DATA1/trdb/datafile/users.dbf',
'+DATA1/trdb/datafile/my_ts.dbf'
CHARACTER SET WE8MSWIN1252
;
Once all datafiles are copied, we need to get the online logs or archive logs to recover our
database at the target site.
Copy online logs
Go to the alert log of the source database and find out which one is the current online log, and then copy the online log to the local file system using the above method.
Once all files are on the local file system we can scp them to the destination/target machine.
Copy the datafiles to the target ASM disk
Now we need to copy the local file to ASM disk using asmcmd cp command.
asmcmd
ASMCMD> cp /opt/mis/rman/datafile/MY_TS.dbf
+DATA/TRDB/DATAFILE/MY_TS.266.727878895
Perform this for each datafile. Next step is to copy the current logfile to the ASM disk. This can
be a temporary directory on ASM disk.
So I created +DATA/tmp directory using asmcmd mkdir command
asmcmd
ASMCMD>mkdir +DATA/tmp
ASMCMD>cp /opt/mis/rman/datafile/currentolfile.dbf +DATA1/tmp/currentolfile.dbf
Prepare init.ora file for TRDB
Copy the init.ora file from the source database to the target Oracle home dbs directory and edit
it to look similar to the following:
*.audit_file_dest='/local/mnt/oracle/admin/TRDB/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='+DATA1/trdb/controlfile/control01.dbf','+FRA1/trdb/controlfile/control02.dbf'
*.db_block_size=8192
*.db_create_file_dest='+DATA1'
*.db_domain=''
*.db_name='TRDB'
*.db_recovery_file_dest='+FRA1'
*.db_recovery_file_dest_size=4070572032
*.diagnostic_dest='/local/mnt/admin/TRDB'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLXDB)'
*.open_cursors=300
*.pga_aggregate_target=169869312
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=509607936
*.undo_tablespace='UNDOTBS1'
Recover the database
We are ready to recover the database using the create controlfile script generated at source
site.
Make sure following directories exist in ASM
+DATA1/trdb/onlinelog/
+FRA1/trdb/datafile/
+DATA1/trdb/controlfile/
+FRA1/trdb/controlfile/
+DATA1/tmp/
login as sys
TRDB:linux:oracle$ si
SQL*Plus: Release 11.2.0.1.0 Production on Wed Sep 29 16:21:06 2010
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> @/tmp/cr_control_file.sql
ORACLE instance started.
Total System Global Area 509411328 bytes
Fixed Size 2214816 bytes
Variable Size 159384672 bytes
Database Buffers 339738624 bytes
Redo Buffers 8073216 bytes
Control file created.
SQL>
SQL> recover database using backup controlfile until cancel;
ORA-00279: change 6065397 generated at 09/29/2010 14:58:36 needed for thread 1
ORA-00289: suggestion : +FRA1
ORA-00280: change 6065397 for thread 1 is in sequence #278
Specify log: {=suggested | filename | AUTO | CANCEL}
+DATA1/trdb/bklog/logfile2.dbf
Log applied.
Media recovery complete.
SQL> alter database open resetlogs;
Database altered.
SQL> @add_tempfile.sql
We are done! ASM files can thus be copied to the local file system and moved between machines.
******************************************************************************
******************************************************************************
******
******************************************************************************
******************************************************************************
******
GENERAL BACKUP AND RECOVERY QUESTIONS
******************************************************************************
******************************************************************************
******
******************************************************************************
******************************************************************************
******
General Backup and Recovery questions
Why and when should I backup my database?
Backup and recovery is one of the most important aspects of a DBA's job. If you lose your
company's data, you could very well lose your job. Hardware and software can always be
replaced, but your data may be irreplaceable!
Normally one would schedule a hierarchy of daily, weekly and monthly backups, however
consult with your users before deciding on a backup schedule. Backup frequency normally
depends on the following factors:
* Rate of data change/ transaction rate
* Database availability/ Can you shutdown for cold backups?
* Criticality of the data/ Value of the data to the company
* Read-only tablespace needs backing up just once right after you make it read-only
* If you are running in archivelog mode you can backup parts of a database over an extended
cycle of days
* If archive logging is enabled one needs to backup archived log files timeously to prevent
database freezes
* Etc.
Carefully plan backup retention periods. Ensure enough backup media (tapes) are available and
that old backups are expired in-time to make media available for new backups. Off-site vaulting
is also highly recommended.
Frequently test your ability to recover and document all possible scenarios. Remember, it's the
little things that will get you. Most failed recoveries are a result of organizational errors and
miscommunication.
What strategies are available for backing-up an Oracle database?
The following methods are valid for backing-up an Oracle database:
* Export/Import - Exports are "logical" database backups in that they extract logical
definitions and data from the database to a file. See the Import/ Export FAQ for more details.
* Cold or Off-line Backups - shut the database down and backup up ALL data, log, and control
files.
* Hot or On-line Backups - If the database is available and in ARCHIVELOG mode, set the
tablespaces into backup mode and backup their files. Also remember to backup the control files
and archived redo log files.
* RMAN Backups - while the database is off-line or on-line, use the "rman" utility to backup
the database.
It is advisable to use more than one of these methods to backup your database. For example, if
you choose to do on-line database backups, also cover yourself by doing database exports. Also
test ALL backup and recovery scenarios carefully. It is better to be safe than sorry.
Regardless of your strategy, also remember to backup all required software libraries, parameter
files, password files, etc. If your database is in ARCHIVELOG mode, you also need to backup
archived log files.
What is the difference between online and offline backups?
A hot (or on-line) backup is a backup performed while the database is open and available for
use (read and write activity). Except for Oracle exports, one can only do on-line backups when
the database is in ARCHIVELOG mode.
A cold (or off-line) backup is a backup performed while the database is off-line and unavailable
to its users. Cold backups can be taken regardless of whether the database is in ARCHIVELOG or
NOARCHIVELOG mode.
It is easier to restore from off-line backups as no recovery (from archived logs) would be
required to make the database consistent. Nevertheless, on-line backups are less disruptive and
don't require database downtime.
Point-in-time recovery (regardless of whether you do on-line or off-line backups) is only available when
the database is in ARCHIVELOG mode.
What is the difference between restoring and recovering?
Restoring involves copying backup files from secondary storage (backup media) to disk. This can
be done to replace damaged files or to copy/move a database to a new location.
Recovery is the process of applying redo logs to the database to roll it forward. One can roll forward until a specific point-in-time (before the disaster occurred), or roll forward until the last
transaction recorded in the log files.
SQL> connect SYS as SYSDBA
SQL> RECOVER DATABASE UNTIL TIME '2001-03-06:16:00:00' USING BACKUP CONTROLFILE;
RMAN> run {
set until time "to_date('04-Aug-2004 00:00:00', 'DD-MON-YYYY HH24:MI:SS')";
restore database;
recover database;
}
My database is down and I cannot restore. What now?
This is probably not the appropriate time to be sarcastic, but recovery without backups is not
supported. You know that you should have tested your recovery strategy, and that you should
always backup a corrupted database before attempting to restore/recover it.
Nevertheless, Oracle Consulting can sometimes extract data from an offline database using a
utility called DUL (Disk UnLoad - Life is DUL without it!). This utility reads data in the data files
and unloads it into SQL*Loader or export dump files. Hopefully you'll then be able to load the
data into a working database.
Note that DUL does not care about rollback segments, corrupted blocks, etc, and can thus not
guarantee that the data is not logically corrupt. It is intended as an absolute last resort and will
most likely cost your company a lot of money!
DUDE (Database Unloading by Data Extraction) is another non-Oracle utility that can be used to
extract data from a dead database. More info about DUDE is available at
http://www.ora600.nl/.
How does one backup a database using the export utility?
Oracle exports are "logical" database backups (not physical) as they extract data and logical
definitions from the database into a file. Other backup strategies normally back-up the physical
data files.
One of the advantages of exports is that one can selectively re-import tables; however, one
cannot roll forward from a restored export. To completely restore a database from an export
file one practically needs to recreate the entire database.
Always do full system level exports (FULL=YES). Full exports include more information about the
database in the export file than user level exports. For more information about the Oracle
export and import utilities, see the Import/ Export FAQ.
How does one put a database into ARCHIVELOG mode?
The main reason for running in archivelog mode is that one can provide 24-hour availability and
guarantee complete data recoverability. It is also necessary to enable ARCHIVELOG mode
before one can start to use on-line database backups.
Issue the following commands to put a database into ARCHIVELOG mode:
SQL> CONNECT sys AS SYSDBA
SQL> STARTUP MOUNT EXCLUSIVE;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ARCHIVE LOG START;
SQL> ALTER DATABASE OPEN;
Alternatively, add the above commands into your database's startup command script, and
bounce the database.
The following parameters need to be set for databases in ARCHIVELOG mode:
log_archive_start        = TRUE
log_archive_dest_1       = 'LOCATION=/arch_dir_name'
log_archive_dest_state_1 = ENABLE
log_archive_format       = %d_%t_%s.arc
NOTE 1: Remember to take a baseline database backup right after enabling archivelog mode.
Without it one would not be able to recover. Also, implement an archivelog backup to prevent
the archive log directory from filling-up.
NOTE 2: ARCHIVELOG mode was introduced with Oracle 6, and is essential for database point-in-time recovery. Archiving can be used in combination with on-line and off-line database
backups.
NOTE 3: You may want to set the following INIT.ORA parameters when enabling ARCHIVELOG
mode: log_archive_start=TRUE, log_archive_dest=..., and log_archive_format=...
NOTE 4: You can change the archive log destination of a database on-line with the ARCHIVE LOG
START TO 'directory'; statement. This statement is often used to switch archiving between a set
of directories.
NOTE 5: When running Oracle Real Application Clusters (RAC), you need to shut down all nodes
before changing the database to ARCHIVELOG mode. See the RAC FAQ for more details.
I've lost an archived/online REDO LOG file, can I get my DB back?
The following INIT.ORA/SPFILE parameter can be used if your current redologs are corrupted or
blown away. It may also be handy if you do database recovery and one of the archived log files
is missing and cannot be restored.
NOTE: Caution is advised when enabling this parameter as you might end-up losing your entire
database. Please contact Oracle Support before using it.
_allow_resetlogs_corruption = true
This should allow you to open the database. However, after using this parameter your database
will be inconsistent (some committed transactions may be lost or partially applied).
Steps:
* Do a "SHUTDOWN NORMAL" of the database
* Set the above parameter
* Do a "STARTUP MOUNT" and "ALTER DATABASE OPEN RESETLOGS;"
* If the database asks for recovery, use an UNTIL CANCEL type recovery and apply all available
archive and on-line redo logs, then issue CANCEL and reissue the "ALTER DATABASE OPEN
RESETLOGS;" command.
* Wait a couple of minutes for Oracle to sort itself out
* Do a "SHUTDOWN NORMAL"
* Remove the above parameter!
* Do a database "STARTUP" and check your ALERT.LOG file for errors.
* Extract the data and rebuild the entire database
User managed backup and recovery
This section deals with user managed, or non-RMAN backups.
How does one do off-line database backups?
Shut down the database from sqlplus or server manager. Backup all files to secondary storage
(eg. tapes). Ensure that you backup all data files, all control files and all log files. When
completed, restart your database.
Run the following queries to get a list of all files that need to be backed up:
select name from sys.v_$datafile;
select member from sys.v_$logfile;
select name from sys.v_$controlfile;
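The three queries above can also be combined into a single listing. This is a sketch: the column alias is mine, the views are the standard dynamic performance views.

```sql
-- One combined list of every file to include in an off-line backup
SELECT name AS backup_file FROM sys.v_$datafile
UNION ALL
SELECT member FROM sys.v_$logfile
UNION ALL
SELECT name FROM sys.v_$controlfile;
```

Spool the output to a file and feed it to your tape backup utility.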
Sometimes Oracle takes forever to shut down with the "immediate" option. As a workaround to
this problem, shutdown using these commands:
alter system checkpoint;
shutdown abort
startup restrict
shutdown immediate
Note that if your database is in ARCHIVELOG mode, one can still use archived log files to roll
forward from an off-line backup. If you cannot take your database down for a cold (off-line)
backup at a convenient time, switch your database into ARCHIVELOG mode and perform hot
(on-line) backups.
How does one do on-line database backups?
Each tablespace that needs to be backed-up must be switched into backup mode before
copying the files out to secondary storage (tapes). Look at this simple example.
ALTER TABLESPACE xyz BEGIN BACKUP;
! cp xyzFile1 /backupDir/
ALTER TABLESPACE xyz END BACKUP;
It is better to back up tablespace by tablespace than to put all tablespaces in backup mode.
Backing them up separately incurs less overhead. When done, remember to backup your
control files. Look at this example:
ALTER SYSTEM SWITCH LOGFILE; -- Force log switch to update control file headers
ALTER DATABASE BACKUP CONTROLFILE TO '/backupDir/control.dbf';
NOTE: Do not run on-line backups during peak processing periods. Oracle will write complete
database blocks instead of the normal deltas to redo log files while in backup mode. This will
lead to excessive database archiving and even database freezes.
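The per-tablespace loop described above can be scripted by generating the statements from the data dictionary. This is a sketch, assuming the standard DBA_TABLESPACES view; in a real script each generated BEGIN BACKUP must be followed by the file copies for that tablespace before its matching END BACKUP is run.

```sql
-- Generate BEGIN BACKUP / END BACKUP statements for every permanent tablespace
SELECT 'ALTER TABLESPACE ' || tablespace_name || ' BEGIN BACKUP;' AS cmd
  FROM dba_tablespaces
 WHERE contents = 'PERMANENT'
UNION ALL
SELECT 'ALTER TABLESPACE ' || tablespace_name || ' END BACKUP;'
  FROM dba_tablespaces
 WHERE contents = 'PERMANENT';
```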
My database was terminated while in BACKUP MODE, do I need to recover?
If a database was terminated while one of its tablespaces was in BACKUP MODE (ALTER
TABLESPACE xyz BEGIN BACKUP;), it will tell you that media recovery is required when you try
to restart the database. The DBA is then required to recover the database and apply all archived
logs to the database. However, from Oracle 7.2, one can simply take the individual datafiles out
of backup mode and restart the database.
ALTER DATABASE DATAFILE '/path/filename' END BACKUP;
One can select from V$BACKUP to see which datafiles are in backup mode. This normally saves
a significant amount of database down time. See script end_backup2.sql in the Scripts section of
this site.
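Such a check against V$BACKUP can be written as follows (a sketch; a STATUS of ACTIVE indicates a datafile still in backup mode):

```sql
-- Datafiles still in hot backup mode after a crash
SELECT d.name, b.status, b.time
  FROM v$backup b
  JOIN v$datafile d ON b.file# = d.file#
 WHERE b.status = 'ACTIVE';
```

Each file listed can then be taken out of backup mode with ALTER DATABASE DATAFILE '...' END BACKUP.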
From Oracle9i onwards, the following command can be used to take all of the datafiles out of
hotbackup mode:
ALTER DATABASE END BACKUP;
This command must be issued when the database is mounted, but not yet opened.
Does Oracle write to data files in begin/hot backup mode?
When a tablespace is in backup mode, Oracle will stop updating its file headers, but will
continue to write to the data files.
When in backup mode, Oracle will write complete changed blocks to the redo log files. Normally
only deltas (change vectors) are logged to the redo logs. This is done to enable reconstruction
of a block if only half of it was backed up (split blocks). Because of this, one should notice
increased log activity and archiving during on-line backups.
To solve this problem, simply switch to RMAN backups.
RMAN backup and recovery
This section deals with RMAN backups:
What is RMAN and how does one use it?
Recovery Manager (or RMAN) is an Oracle provided utility for backing-up, restoring and
recovering Oracle Databases. RMAN ships with the database server and doesn't require a
separate installation. The RMAN executable is located in your ORACLE_HOME/bin directory.
In fact, RMAN is just a Pro*C application that translates commands to a PL/SQL interface. The
PL/SQL calls are statically linked into the Oracle kernel, and do not require the database to be
opened (mapped from the ?/rdbms/admin/recover.bsq file).
RMAN can do off-line and on-line database backups. It cannot, however, write directly to tape,
but various 3rd-party tools (like Veritas, Omiback, etc) can integrate with RMAN to handle tape
library management.
RMAN can be operated from Oracle Enterprise Manager, or from command line. Here are the
command line arguments:
Argument   Value          Description
-----------------------------------------------------------------------------
target     quoted-string  connect-string for target database
catalog    quoted-string  connect-string for recovery catalog
nocatalog  none           if specified, then no recovery catalog
cmdfile    quoted-string  name of input command file
log        quoted-string  name of output message log file
trace      quoted-string  name of output debugging message log file
append     none           if specified, log is opened in append mode
debug      optional-args  activate debugging
msgno      none           show RMAN-nnnn prefix for all messages
send       quoted-string  send a command to the media manager
pipe       string         building block for pipe names
timeout    integer        number of seconds to wait for pipe input
-----------------------------------------------------------------------------
Here is an example:
[oracle@localhost oracle]$ rman
Recovery Manager: Release 10.1.0.2.0 - Production
Copyright (c) 1995, 2004, Oracle. All rights reserved.
RMAN> connect target;
connected to target database: ORCL (DBID=1058957020)
RMAN> backup database;
...
How does one backup and restore a database using RMAN?
The biggest advantage of RMAN is that it only backs up used space in the database. RMAN
doesn't put tablespaces in backup mode, saving on redo generation overhead. RMAN will
re-read database blocks until it gets a consistent image of them. Look at this simple backup example.
rman target sys/*** nocatalog
run {
allocate channel t1 type disk;
backup
format '/app/oracle/backup/%d_t%t_s%s_p%p'
(database);
release channel t1;
}
Example RMAN restore:
rman target sys/*** nocatalog
run {
allocate channel t1 type disk;
# set until time 'Aug 07 2000 :51';
restore tablespace users;
recover tablespace users;
release channel t1;
}
The examples above are extremely simplistic and only useful for illustrating basic concepts. By
default Oracle uses the database controlfiles to store information about backups. Normally one
would rather set up an RMAN catalog database to store RMAN metadata in. Read the Oracle
Backup and Recovery Guide before implementing any RMAN backups.
Note: RMAN cannot write image copies directly to tape. One needs to use a third-party media
manager that integrates with RMAN to backup directly to tape. Alternatively one can backup to
disk and then manually copy the backups to tape.
How does one backup and restore archived log files?
One can backup archived log files using RMAN or any operating system backup utility.
Remember to delete files after backing them up to prevent the archive log directory from filling
up. If the archive log directory becomes full, your database will hang! Look at this simple RMAN
backup script:
RMAN> run {
2> allocate channel dev1 type disk;
3> backup
4> format '/app/oracle/archback/log_%t_%sp%p'
5> (archivelog all delete input);
6> release channel dev1;
7> }
The "delete input" clause will delete the archived logs as they are backed-up.
List all archivelog backups for the past 24 hours:
RMAN> LIST BACKUP OF ARCHIVELOG FROM TIME 'sysdate-1';
Here is a restore example:
RMAN> run {
2> allocate channel dev1 type disk;
3> restore (archivelog low logseq 78311 high logseq 78340 thread 1 all);
4> release channel dev1;
5> }
How does one create a RMAN recovery catalog?
Start by creating a database schema (usually called rman). Assign an appropriate tablespace to
it and grant it the recovery_catalog_owner role. Look at this example:
sqlplus sys
SQL> create user rman identified by rman;
SQL> alter user rman default tablespace tools temporary tablespace temp;
SQL> alter user rman quota unlimited on tools;
SQL> grant connect, resource, recovery_catalog_owner to rman;
SQL> exit;
Next, log in to rman and create the catalog schema. Prior to Oracle 8i this was done by running
the catrman.sql script.
rman catalog rman/rman
RMAN> create catalog tablespace tools;
RMAN> exit;
You can now continue by registering your databases in the catalog. Look at this example:
rman catalog rman/rman target backdba/backdba
RMAN> register database;
One can also use the "upgrade catalog;" command to upgrade to a new RMAN release, or the
"drop catalog;" command to remove an RMAN catalog. These commands need to be entered
twice to confirm the operation.
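For example, upgrading a catalog looks like this (a sketch; the rman/rman credentials follow the example schema above, and the repeated command is RMAN's confirmation mechanism):

```
rman catalog rman/rman
RMAN> upgrade catalog;
RMAN> upgrade catalog;
```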
How does one integrate RMAN with third-party Media Managers?
The following Media Management Software Vendors have integrated their media management
software with RMAN (Oracle Recovery Manager):
* Veritas NetBackup - http://www.veritas.com/
* EMC Data Manager (EDM) - http://www.emc.com/
* HP OMNIBack/ DataProtector - http://www.hp.com/
* IBM's Tivoli Storage Manager (formerly ADSM) - http://www.tivoli.com/storage/
* EMC Networker - http://www.emc.com/
* BrightStor ARCserve Backup - http://www.ca.com/us/data-loss-prevention.aspx
* Sterling Software's SAMS:Alexandria (formerly from Spectralogic) - http://www.sterling.com/sams/
* SUN's Solstice Backup - http://www.sun.com/software/whitepapers/backup-n-storage/
* CommVault Galaxy - http://www.commvault.com/
* etc...
The above Media Management Vendors will provide first line technical support (and installation
guides) for their respective products.
A complete list of supported Media Management Vendors can be found at:
http://www.oracle.com/technology/deploy/availability/htdocs/bsp.htm
When allocating channels one can specify Media Management specific parameters. Here are
some examples:
Netbackup on Solaris:
allocate channel t1 type 'SBT_TAPE'
PARMS='SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so.1';
Netbackup on Windows:
allocate channel t1 type 'SBT_TAPE' send "NB_ORA_CLIENT=client_machine_name";
Omniback/ DataProtector on HP-UX:
allocate channel t1 type 'SBT_TAPE'
PARMS='SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.sl';
or:
allocate channel 'dev_1' type 'sbt_tape' parms
'ENV=OB2BARTYPE=Oracle8,OB2APPNAME=orcl,OB2BARLIST=machinename_orcl_archlogs)';
How does one clone/duplicate a database with RMAN?
The first step to clone or duplicate a database with RMAN is to create a new INIT.ORA and
password file (use the orapwd utility) on the machine you need to clone the database to.
Review all parameters and make the required changes. For example, set the DB_NAME
parameter to the new database's name.
Secondly, you need to change your environment variables, and do a STARTUP NOMOUNT from
sqlplus. This database is referred to as the AUXILIARY in the script below.
Lastly, write a RMAN script like this to do the cloning, and call it with "rman cmdfile dupdb.rcv":
connect target sys/secure@origdb
connect catalog rman/rman@catdb
connect auxiliary /
run {
set newname for datafile 1 to '/ORADATA/u01/system01.dbf';
set newname for datafile 2 to '/ORADATA/u02/undotbs01.dbf';
set newname for datafile 3 to '/ORADATA/u03/users01.dbf';
set newname for datafile 4 to '/ORADATA/u03/indx01.dbf';
set newname for datafile 5 to '/ORADATA/u02/example01.dbf';
allocate auxiliary channel dupdb1 type disk;
set until sequence 2 thread 1;
duplicate target database to dupdb
logfile
GROUP 1 ('/ORADATA/u02/redo01.log') SIZE 200k REUSE,
GROUP 2 ('/ORADATA/u03/redo02.log') SIZE 200k REUSE;
}
The above script will connect to the "target" (database that will be cloned), the recovery catalog
(to get backup info), and the auxiliary database (new duplicate DB). Previous backups will be
restored and the database recovered to the "set until" point specified in the script.
Notes: the "set newname" commands are only required if your datafile names will be different
from those of the target database.
The newly cloned DB will have its own unique DBID.
Can one restore RMAN backups without a CONTROLFILE and RECOVERY CATALOG?
Details of RMAN backups are stored in the database control files and optionally a Recovery
Catalog. If both these are gone, RMAN cannot restore the database. In such a situation one
must extract a control file (or other files) from the backup pieces written out when the last
backup was taken. Let's look at an example:
Let's take a backup (partial in our case for illustrative purposes):
$ rman target / nocatalog
Recovery Manager: Release 10.1.0.2.0 - 64bit Production
Copyright (c) 1995, 2004, Oracle. All rights reserved.
connected to target database: ORCL (DBID=1046662649)
using target database controlfile instead of recovery catalog
RMAN> backup datafile 1;
Starting backup at 20-AUG-04
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=146 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/oradata/orcl/system01.dbf
channel ORA_DISK_1: starting piece 1 at 20-AUG-04
channel ORA_DISK_1: finished piece 1 at 20-AUG-04
piece handle=
/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_nnndf_TAG20040820T153256_
0lczd9tf_.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current controlfile in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 20-AUG-04
channel ORA_DISK_1: finished piece 1 at 20-AUG-04
piece handle=
/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_ncsnf_TAG20040820T153256_
0lczfrx8_.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:04
Finished backup at 20-AUG-04
Now, let's destroy one of the control files:
SQL> show parameters CONTROL_FILES

NAME           TYPE     VALUE
-------------- -------- ------------------------------
control_files  string   /oradata/orcl/control01.ctl,
                        /oradata/orcl/control02.ctl,
                        /oradata/orcl/control03.ctl
SQL> shutdown abort;
ORACLE instance shut down.
SQL> ! mv /oradata/orcl/control01.ctl /tmp/control01.ctl
Now, let's see if we can restore it. First we need to start the database in NOMOUNT mode:
SQL> startup NOMOUNT
ORACLE instance started.
Total System Global Area  289406976 bytes
Fixed Size                  1301536 bytes
Variable Size             262677472 bytes
Database Buffers           25165824 bytes
Redo Buffers                 262144 bytes
Now, from SQL*Plus, run the following PL/SQL block to restore the file:
DECLARE
v_devtype VARCHAR2(100);
v_done BOOLEAN;
v_maxPieces NUMBER;
TYPE t_pieceName IS TABLE OF varchar2(255) INDEX BY binary_integer;
v_pieceName t_pieceName;
BEGIN
-- Define the backup pieces... (names from the RMAN Log file)
v_pieceName(1) :=
'/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_ncsnf_TAG20040820T153256_
0lczfrx8_.bkp';
v_pieceName(2) :=
'/flash_recovery_area/ORCL/backupset/2004_08_20/o1_mf_nnndf_TAG20040820T153256_
0lczd9tf_.bkp';
v_maxPieces := 2;
-- Allocate a channel... (Use type=>null for DISK, type=>'sbt_tape' for TAPE)
v_devtype := DBMS_BACKUP_RESTORE.deviceAllocate(type=>NULL, ident=>'d1');
-- Restore the first Control File...
DBMS_BACKUP_RESTORE.restoreSetDataFile;
-- CFNAME must be the exact path and filename of a controlfile that was backed up
DBMS_BACKUP_RESTORE.restoreControlFileTo
(cfname=>'/app/oracle/oradata/orcl/control01.ctl');
dbms_output.put_line('Start restoring '||v_maxPieces||' pieces.');
FOR i IN 1..v_maxPieces LOOP
dbms_output.put_line('Restoring from piece '||v_pieceName(i));
DBMS_BACKUP_RESTORE.restoreBackupPiece(handle=>v_pieceName(i), done=>v_done,
params=>null);
exit when v_done;
END LOOP;
-- Deallocate the channel...
DBMS_BACKUP_RESTORE.deviceDeAllocate('d1');
EXCEPTION
WHEN OTHERS THEN
DBMS_BACKUP_RESTORE.deviceDeAllocate;
RAISE;
END;
/
******************************************************************************
******************************************************************************
**********
******************************************************************************
******************************************************************************
**********
******************************************************************************
******************************************************************************
**********
ASM Interview Questions
******************************************************************************
******************************************************************************
**********
******************************************************************************
******************************************************************************
**********
******************************************************************************
******************************************************************************
**********
1) What is ASM?
In Oracle Database 10g/11g there are two types of instances: database and ASM instances. The
ASM instance, which is generally named +ASM, is started with the INSTANCE_TYPE=ASM init.ora
parameter. This parameter, when set, signals the Oracle initialization routine to start an ASM
instance and not a standard database instance. Unlike the standard database instance, the ASM
instance contains no physical files such as logfiles, controlfiles or datafiles, and only requires a
few init.ora parameters for startup.
Upon startup, an ASM instance will spawn all the basic background processes, plus some new
ones that are specific to the operation of ASM. The STARTUP clauses for ASM instances are
similar to those for database instances. For example, RESTRICT prevents database instances
from connecting to this ASM instance. NOMOUNT starts up an ASM instance without mounting
any disk group. The MOUNT option simply mounts all defined diskgroups.
For RAC configurations, the ASM SID is +ASMx instance, where x represents the instance
number.
2) What are the key benefits of ASM?
ASM provides filesystem and volume manager capabilities built into the Oracle database kernel.
With this capability, ASM simplifies storage management tasks, such as creating/laying out
databases and disk space management. Since ASM allows disk management to be done using
familiar create/alter/drop SQL statements, DBAs do not need to learn a new skill set or make
crucial decisions on provisioning.
The following are some key benefits of ASM:
* ASM spreads I/O evenly across all available disk drives to prevent hot spots and maximize
performance.
* ASM eliminates the need for over-provisioning and maximizes storage resource utilization,
facilitating database consolidation.
* Inherent large file support.
* Performs automatic online redistribution after the incremental addition or removal of
storage capacity.
* Maintains redundant copies of data to provide high availability, or leverages 3rd party RAID
functionality.
* Supports Oracle Database as well as Oracle Real Application Clusters (RAC).
* Capable of leveraging 3rd party multipathing technologies.
* For simplicity and easier migration to ASM, an Oracle database can contain both ASM and non-ASM files.
* Any new files can be created as ASM files whilst existing files can also be migrated to ASM.
* RMAN commands enable non-ASM managed files to be relocated to an ASM disk group.
* Enterprise Manager Database Control or Grid Control can be used to manage ASM disk and
file activities.
3) Describe about ASM architecture.
Automatic Storage Management (ASM) instance
Instance that manages the diskgroup metadata
Disk Groups
Logical grouping of disks
Determines file mirroring options
ASM Disks
LUNs presented to ASM
ASM Files
Files that are stored in ASM disk groups are called ASM files; this includes database files.
Notes:
Many databases can connect as clients to a single ASM instance
The ASM instance name should be +ASM only
One diskgroup can serve many databases
4) How does database connects to ASM Instance?
The database communicates with the ASM instance using the ASMB (umbilicus) process.
Once the database obtains the necessary extents from the extent map, all database I/O going
forward is processed by the database processes, bypassing ASM. Thus we say ASM is
not really in the I/O path. So, to the question of how to make ASM go faster: you don't have to.
4) What init.ora parameters does a user need to configure for ASM instances?
The default parameter settings work perfectly for ASM. The only parameters needed for 11g
ASM:
• PROCESSES*
• ASM_DISKSTRING*
• ASM_DISKGROUPS
• INSTANCE_TYPE
5) How does the database interact with the ASM instance and how do I make ASM go faster?
ASM is not in the I/O path so ASM does not impede the database file access. Since the RDBMS
instance is performing raw I/O, the I/O is as fast as possible.
6) Do I need to define the RDBMS FILESYSTEMIO_OPTIONS parameter when I use ASM?
No. The RDBMS does I/O directly to the raw disk devices, the FILESYSTEMIO_OPTIONS
parameter is only for filesystems.
7) Why Oracle recommends two diskgroups?
Oracle recommends two diskgroups to provide a balance of manageability, utilization, and
performance.
8) We have a 16 TB database. I’m curious about the number of disk groups we should use; e.g. 1
large disk group, a couple of disk groups, or otherwise?
For VLDBs you will probably end up with different storage tiers; e.g., with some of our large
customers they have Tier1 (RAID10 FC), Tier2 (RAID5 FC), Tier3 (SATA), etc. Each of these is
mapped to a diskgroup.
10) Would it be better to use BIGFILE tablespaces, or standard tablespaces for ASM?
The use of Bigfile tablespaces has no bearing on ASM (or vice versa). In fact most database
object related decisions are transparent to ASM.
11) What is the best LUN size for ASM?
There is no best size! In most cases the storage team will dictate to you based on their
standardized LUN size. The ASM administrator merely has to communicate the ASM Best
Practices and application characteristics to the storage folks:
• Need equally sized / performance LUNs
• Minimum of 4 LUNs
• The capacity requirement
• The workload characteristic (random r/w, sequential r/w) & any response time SLA
Using this info and their standards, the storage folks should build a nice LUN group set for you.
12) In 11g RAC we want to separate ASM admins from DBAs and create different users and
groups. How do we set this up?
For clarification
• Separate Oracle Home for ASM and RDBMS.
• RDBMS instance connects to ASM using OSDBA group of the ASM instance.
Thus, software owner for each RDBMS instance connecting to ASM must be
a member of ASM’s OSDBA group.
• Choose a different OSDBA group for ASM instance (asmdba) than for
RDBMS instance (dba)
• In 11g, ASM administrator has to be member of a separate SYSASM group to
separate ASM Admin and DBAs.
13) Can my RDBMS and ASM instances run different versions?
Yes. ASM can be at a higher or lower version than its client databases. There are two
components of compatibility:
Software compatibility
Diskgroup compatibility attributes:
compatible.asm
compatible.rdbms
14) Where do I run my database listener from; i.e., ASM HOME or DB HOME?
It is recommended to run the listener from the ASM HOME. This is particularly important for
RAC env, since the listener is a node-level resource. In this config, you can create additional
[user] listeners from the database homes as needed.
15) How do I backup my ASM instance?
Not applicable! ASM has no files to back up, as it does not contain a controlfile, redo logs, etc.
16) When should I use RMAN and when should I use ASMCMD copy?
* RMAN is the recommended and most complete and flexible method to backup and
transport database files in ASM.
ASMCMD copy is good for copying single files
• Supports all Oracle file types
• Can be used to instantiate a Data Guard environment
• Does not update the controlfile
• Does not create OMF files
17) I'm going to add disks to my ASM diskgroup; how long will this rebalance take?
* Rebalance time is heavily driven by three items:
1) Amount of data currently in the diskgroup
2) I/O bandwidth available on the server
3) ASM_POWER_LIMIT or rebalance power level
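As a rough illustration of how these three factors interact, here is a hedged back-of-envelope sketch in Python. This is not an Oracle formula: the proportional-move assumption and the linear power scaling are simplifications for intuition only.

```python
# Back-of-envelope rebalance estimate. Assumptions (not Oracle-documented):
# adding disks moves roughly the fraction of data needed to even out the
# disks, and throughput scales linearly with the rebalance power level.

def estimate_rebalance_hours(data_gb, old_disks, new_disks,
                             server_mb_per_s, power, max_power=11):
    """Rough duration of a rebalance after adding disks to a disk group."""
    # Fraction of existing data that must move to the new disks so that
    # every disk ends up holding a proportional share of the extents.
    moved_gb = data_gb * new_disks / (old_disks + new_disks)
    # Crude assumption: usable bandwidth scales with ASM_POWER_LIMIT.
    effective_mb_per_s = server_mb_per_s * power / max_power
    return (moved_gb * 1024 / effective_mb_per_s) / 3600.0
```

Actual progress is reported in V$ASM_OPERATION, which is the authoritative source while a rebalance runs.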
18) We are migrating to a new storage array. How do I move my ASM database from storage A
to storage B?
Given that the new and old storage are both visible to ASM, simply add the new disks to the
ASM disk group and drop the old disks. ASM rebalance will migrate data online.
Note 428681.1 covers how to move OCR/Voting disks to the new storage array
19) Is it possible to unplug an ASM disk group from one platform and plug into a server on
another platform (for example, from Solaris to Linux)?
No. Cross-platform disk group migration is not supported. To move datafiles between platforms
of different endianness, you need to use XTTS, Data Pump, or Streams.
20) How does ASM work with multipathing software?
It works great! Multipathing software is at a layer lower than ASM, and thus is transparent.
You may need to adjust ASM_DISKSTRING to specify only the path to the multipathing pseudo
devices.
21) Is ASM constantly rebalancing to manage “hot spots”?
No…No…Nope!! ASM provides even distribution of extents across all disks in a disk group. Since
each disk will have an equal number of extents, no single disk will be hotter than another. Thus
the answer is no: ASM does not dynamically move hot spots, because hot spots simply do not
occur in ASM configurations. Rebalance only occurs on storage configuration changes (e.g. add,
drop, or resize disks).
22) What file types does ASM support and keep in disk groups?
Control files
Flashback logs
Data Pump dump sets
Data files
DB SPFILE
Data Guard configuration
Temporary data files
RMAN backup sets
Change tracking bitmaps
Online redo logs
RMAN data file copies
OCR files
Archive logs
Transport data files
ASM SPFILE
23. List the key benefits of ASM.
* Stripes files rather than logical volumes
* Provides redundancy on a file basis
* Enables online disk reconfiguration and dynamic rebalancing
* Reduces the time significantly to resynchronize a transient failure by tracking changes while
disk is offline
* Provides adjustable rebalancing speed
* Is cluster-aware
* Supports reading from mirrored copy instead of primary copy for extended clusters
* Is automatically installed as part of the Grid Infrastructure
24. What is ASM Striping?
ASM can use variable size data extents to support larger files, reduce memory requirements,
and improve performance.
Each data extent resides on an individual disk.
Data extents consist of one or more allocation units.
The data extent size is:
* Equal to AU for the first 20,000 extents (0–19999)
* Equal to 4 × AU for the next 20,000 extents (20000–39999)
* Equal to 16 × AU for extents 40,000 and beyond
ASM stripes files using extents with a coarse method for load balancing or a fine method to
reduce latency.
* Coarse-grained striping is always equal to the effective AU size.
* Fine-grained striping is always equal to 128 KB.
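The variable extent sizing rule above can be expressed directly. This sketch encodes the 11g progression (1×, 4×, 16× AU) with the AU size as a parameter:

```python
def extent_size_aus(extent_number):
    """Size of a given data extent in allocation units (11g variable
    extents): 1 AU for extents 0-19999, 4 x AU for 20000-39999, and
    16 x AU from extent 40000 onward."""
    if extent_number < 20000:
        return 1
    if extent_number < 40000:
        return 4
    return 16

def extent_size_bytes(extent_number, au_bytes=1024 * 1024):
    """Extent size in bytes for a given AU size (default 1 MB AU)."""
    return extent_size_aus(extent_number) * au_bytes
```

With a 1 MB AU, extent 20000 is therefore 4 MB; larger AUs scale the whole progression up.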
26. How many ASM Diskgroups can be created under one ASM Instance?
ASM imposes the following limits:
* 63 disk groups in a storage system
* 10,000 ASM disks in a storage system
* Two-terabyte maximum storage for each ASM disk (non-Exadata)
* Four-petabyte maximum storage for each ASM disk (Exadata)
* 40-exabyte maximum storage for each storage system
* 1 million files for each disk group
* ASM file size limits (database limit is 128 TB):
1. External redundancy maximum file size is 140 PB.
2. Normal redundancy maximum file size is 42 PB.
3. High redundancy maximum file size is 15 PB.
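The limits above lend themselves to a quick sanity-check helper. The numbers below are copied from the list in this answer; they are release-dependent, so treat this as a sketch and verify against the documentation for your version:

```python
# ASM limits as listed above (sketch; values vary by release/platform).
MAX_DISK_GROUPS = 63
MAX_DISKS = 10_000
MAX_FILES_PER_DISK_GROUP = 1_000_000

# Maximum ASM file size in petabytes, by disk group redundancy.
MAX_FILE_SIZE_PB = {"external": 140, "normal": 42, "high": 15}

def check_config(disk_groups, disks, redundancy, largest_file_pb):
    """Return a list of limit violations for a proposed storage layout."""
    problems = []
    if disk_groups > MAX_DISK_GROUPS:
        problems.append("too many disk groups")
    if disks > MAX_DISKS:
        problems.append("too many ASM disks")
    if largest_file_pb > MAX_FILE_SIZE_PB[redundancy]:
        problems.append("file exceeds redundancy limit")
    return problems
```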
27) What is a diskgroup?
A disk group consists of multiple disks and is the fundamental object that ASM manages. Each
disk group contains the metadata that is required for the management of space in the disk
group. The ASM instance manages the metadata about the files in a Disk Group in the same way
that a file system manages metadata about its files. However, the vast majority of I/O
operations do not pass through the ASM instance. In a moment we will look at how file
I/O works with respect to the ASM instance.
1A. Database issues an open of a database file
1B. ASM sends the extent map for the file to the database instance. Starting with 11g, the RDBMS
only receives the first 60 extents; the remaining extents in the extent map are paged in on
demand, providing a faster open
2A/2B. Database now reads directly from disk
3A. An RDBMS foreground initiates a file creation (for example, CREATE TABLESPACE)
3B. ASM does the allocation, essentially reserving the allocation units for the file creation
3C. Once the allocation phase is done, the extent map is sent to the RDBMS
3D. The RDBMS initialization phase kicks in. In this phase the RDBMS initializes all
the reserved AUs
3E. If file creation is successful, then the RDBMS commits the file creation
Going forward, all I/Os are done by the RDBMS directly.
30) Can the disks in a diskgroup be of varied sizes? For example, one disk is 100GB and
another disk is 50GB. If so, how does ASM manage the extents?
Yes, disk sizes can vary. Oracle ASM manages data efficiently and intelligently by placing
extents in proportion to the size of each disk in the disk group; bigger disks hold more
extents than smaller ones.
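A hedged sketch of that proportional placement: each disk's expected share of extents matches its fraction of the disk group's total capacity. This is an illustrative model only, not ASM's actual allocator:

```python
def expected_extents_per_disk(disk_sizes_gb, total_extents):
    """Expected extent count per disk when extents are placed in
    proportion to disk size (illustrative model of ASM's behavior)."""
    capacity = sum(disk_sizes_gb)
    return [round(total_extents * size / capacity) for size in disk_sizes_gb]
```

For the example in the question, a 100GB disk paired with a 50GB disk would be expected to hold twice as many extents.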
Top 10 ASM Questions
Q.1) What init.ora parameters does a user need to
configure for ASM instances?
A. The default parameter settings work perfectly for
ASM. The only parameters needed for 11g ASM:
• PROCESSES*
• ASM_DISKSTRING*
• ASM_DISKGROUPS
• INSTANCE_TYPE
• ASM is a very passive instance in that it doesn't have a lot of concurrent transactions
or queries, so the memory footprint is quite small.
• Even if you have 20 DBs connected to ASM, the ASM SGA does not need to change,
because the ASM metadata is not directly tied to the number of clients.
• The 11g MEMORY_TARGET (default value) will be more than sufficient.
• The PROCESSES parameter may need to be modified. Use the following formula to determine
the appropriate value:
processes = 40 + (10 + [max number of concurrent database file
creations and file extend operations possible]) * n
where n is the number of databases connecting to ASM (ASM clients).
The source of concurrent file creations can be any of the following:
• Several concurrent CREATE TABLESPACE commands
• Creation of a partitioned table with several tablespace creations
• RMAN backup channels
• Concurrent archive logfile creations
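The PROCESSES formula above is simple arithmetic; a small helper makes the sizing explicit. The formula is quoted from this answer; treat the inputs as estimates:

```python
def asm_processes(n_clients, max_concurrent_file_ops):
    """PROCESSES for an ASM instance, per the formula above:
    40 + (10 + [max concurrent file creations/extends]) * n,
    where n is the number of databases connecting to ASM."""
    return 40 + (10 + max_concurrent_file_ops) * n_clients
```

For example, 3 client databases each doing at most 5 concurrent file operations would suggest PROCESSES = 40 + 15 * 3 = 85.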
Q.2)How does the database interact with the ASM
instance and how do I make ASM go faster?
A. ASM is not in the I/O path so ASM does not impede
the database file access. Since the RDBMS instance
is performing raw I/O, the I/O is as fast as possible.
• Cover ASM instance architecture
• Cover ASM communication via ASMB
• The database communicates with the ASM instance using the ASMB
(umbilicus) process. Once the database obtains the necessary
extents from the extent map, all database I/O going forward is processed
by the database processes, bypassing ASM. Thus we say
ASM is not really in the I/O path. So, to the question of how to make
ASM go faster: you don't have to.
RDBMS and ASM Instance Interaction
[Diagram: (1) database opens a file: OPEN request, ASM returns the extent map; (2) database
reads the file: READ goes directly to disk, I/O completes; (3) database creates a file: CREATE
request, ASM allocates the file and returns the extent map, the database initializes the file
and commits.]
1A. Database issues an open of a database file
1B. ASM sends the extent map for the file to the database instance. Starting
with 11g, the RDBMS only receives the first 60 extents; the remaining extents
in the extent map are paged in on demand, providing a faster open
2A/2B. Database now reads directly from disk
3A. An RDBMS foreground initiates a file creation (for example, CREATE TABLESPACE)
3B. ASM does the allocation, essentially reserving the allocation units
for the file creation
3C. Once the allocation phase is done, the extent map is sent to the RDBMS
3D. The RDBMS initialization phase kicks in. In this phase the RDBMS initializes all
the reserved AUs
3E. If file creation is successful, then the RDBMS commits the file creation
Going forward, all I/Os are done by the RDBMS.
Q.3) Do I need to define the RDBMS
FILESYSTEMIO_OPTIONS parameter when I use ASM?
A. No. The RDBMS does I/O directly to the raw disk
devices, the FILESYSTEMIO_OPTIONS parameter is
only for filesystems.
A. Review the use of the FILESYSTEMIO_OPTIONS parameter:
essentially, FILESYSTEMIO_OPTIONS is used for filesystem/block
storage. This parameter controls which I/O options are used. The value may be any of
the following:
* asynch - Allows asynchronous I/O to be used where supported by the OS.
* directio - Allows direct I/O to be used where supported by the OS.
Direct I/O bypasses any Unix buffer cache.
* setall - Enables both asynchronous and direct I/O.
* none - Disables asynchronous and direct I/O so that Oracle uses normal
synchronous writes, without any direct I/O options.
A. The RDBMS does raw I/O against the ASM disks, so there is no need for the
FILESYSTEMIO_OPTIONS parameter. The only parameter that needs to
be set is DISK_ASYNCH_IO=TRUE, which is true by default. If using ASMLIB,
then even DISK_ASYNCH_IO does not need to be set.
ASM is also supported for NFS files as ASM disks. In such cases, the
required NFS mount options eliminate the need to set
FILESYSTEMIO_OPTIONS.
Q.4) I read in the ASM Best Practices paper that Oracle
recommends two diskgroups. Why?
A. Oracle recommends two diskgroups to provide a
balance of manageability, utilization, and
performance.
To reduce the complexity of managing ASM and its diskgroups, Oracle recommends that
generally no more than two diskgroups be maintained and managed per RAC cluster or
single ASM instance
• Database work area: This is where active database files such as datafiles, control files,
online redo logs, and change tracking files used in incremental backups are stored. This
location is indicated by DB_CREATE_FILE_DEST.
• Flash recovery area: Where recovery-related files are created, such as multiplexed copies
of the current control file and online redo logs, archived redo logs, backup sets, and
flashback log files. This location is indicated by DB_RECOVERY_FILE_DEST.
• Having one DATA container means one place to store all your database files, and obviates
the need to juggle datafiles around or decide where to place a new tablespace.
Having one container for all your files also means better storage utilization, making the IT
director very happy. If more storage capacity or I/O capacity is needed, just add an ASM
disk; all of these are online activities.
You have to ensure that this storage pool container houses enough spindles to
accommodate the I/O rate of all the database objects.
Bottom line: one container == one pool to manage, monitor, and track.
Note however, that additional diskgroups may be added to support tiered storage classes in
Information Lifecycle Management (ILM) or Hierarchical Storage Management (HSM)
deployments
Q.5) We have a 16 TB database. I’m curious about the
number of disk groups we should use; e.g. 1 large
disk group, a couple of disk groups, or otherwise?
A. For VLDBs you will probably end up with different
storage tiers; e.g with some of our large customers
they have Tier1 (RAID10 FC), Tier2 (RAID5 FC), Tier3
(SATA), etc. Each one of these is mapped to a
diskgroup.
These customers mapped certain tablespaces to specific tiers; e.g., SYSTEM/rollback/SYSAUX
and latency-sensitive tablespaces in Tier1, and less I/O-critical tablespaces on Tier2, etc.
For 10g VLDBs it is best to set an AU size of 16MB; this is more for metadata space
efficiency than for performance. The 16MB recommendation is only necessary if the
diskgroup is going to be used by 10g databases. In 11g we introduced variable size
extents to solve the metadata problem. This requires compatible.rdbms and
compatible.asm to be set to 11.1.0.0. With 11g you should set your AU size to the largest
I/O that you wish to issue for sequential access (other parameters need to be set to
increase the I/O size issued by Oracle). For random small I/Os the AU size does not
matter very much, as long as every file is broken into many more extents than there are
disks.
Q.6) We have a new app and don’t know our access
pattern, but assuming mostly sequential access, what
size would be a good AU fit?
A. For 11g ASM/RDBMS it is recommended to use 4MB
ASM AU for disk groups.
For all 11g ASM/DB users, it is best to create a disk group using a 4 MB ASM AU size;
Metalink Note 810484.1 covers this.
Q.7) Would it be better to use BIGFILE tablespaces, or
standard tablespaces for ASM?
A. The use of Bigfile tablespaces has no bearing on ASM
(or vice versa). In fact most database object related
decisions are transparent to ASM.
272
Nevertheless, bigfile tablespaces have benefits:
fewer datafiles, which means faster database open (fewer files to open),
faster checkpoints, and fewer files to manage. But you'll need to give
careful consideration to backup/recovery of these large datafiles.
Q.8) What is the best LUN size for ASM?
A. There is no best size! In most cases the storage
team will dictate to you based on their standardized
LUN size. The ASM administrator merely has to
communicate the ASM Best Practices and application
characteristics to the storage folks:
• Need equally sized / performance LUNs
• Minimum of 4 LUNs
• The capacity requirement
• The workload characteristic (random r/w, sequential r/w) &
any response time SLA
Using this info and their standards, the storage folks should build a
nice LUN group set for you.
In most cases the storage team will dictate to you what the standardized LUN size
is. This is based on several factors, including RAID LUN set builds (concatenated,
striped, hypers, etc.). Having too many LUNs elongates boot time and is very hard
to manage (zoning, provisioning, masking, etc.); there's a $/LUN barometer! On the
flip side, having too few LUNs makes array cache management difficult to control
and creates unmanageably large LUNs (which are difficult to expand).
The ASM administrator merely has to communicate to the SA/storage folks that you need
equally sized/performance LUNs and what the capacity requirement is, say 10TB.
Using this info, the workload characteristic (random r/w, sequential r/w), and their
standards, the storage folks should build a nice LUN group set for you.
Q.9) In 11g RAC we want to separate ASM admins from
DBAs and create different users and groups. How do
we set this up?
A. For clarification
• Separate Oracle Home for ASM and RDBMS.
• RDBMS instance connects to ASM using OSDBA group of the ASM instance.
Thus, software owner for each RDBMS instance connecting to ASM must be
a member of ASM's OSDBA group.
• Choose a different OSDBA group for ASM instance (asmdba) than for
RDBMS instance (dba)
• In 11g, ASM administrator has to be member of a separate SYSASM group to
separate ASM Admin and DBAs.
Operating system authentication using membership in the group or groups
designated
as OSDBA, OSOPER, and OSASM is valid on all Oracle platforms.
A typical deployment could be as follows:
ASM administrator:
User : asm
Group: oinstall, asmdba(OSDBA), asmadmin(OSASM)
Database administrator:
User : oracle
Group: oinstall, asmdba(OSDBA of ASM), dba(OSDBA)
ASM disk ownership : asm:oinstall
Remember that the database instance connects to the ASM instance as
SYSDBA. The user ID the database instance runs as needs to be in the
OSDBA group of the ASM instance.
Q.10) Can my RDBMS and ASM instances run different
versions?
A. Yes. ASM can be at a higher or lower version than its
client databases. There are two components of compatibility:
• Software compatibility
• Diskgroup compatibility attributes:
• compatible.asm
• compatible.rdbms
This is a diskgroup level change and not an instance level change…no
rolling upgrade here!
Disk Group Compatibility Example
• Start with 10g ASM and RDBMS
• Upgrade ASM to 11g
• Advance compatible.asm:
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.1.0.7.0';
• 10g RDBMS instances are still supported
• 10g ASM instance can no longer mount the disk group
Disk Group Compatibility Example
• Upgrade RDBMS to 11g
• In the RDBMS instance, set initialization parameter compatible = 11.0
• Advance compatible.rdbms:
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1.0.7.0';
• New capabilities enabled:
• Variable size extents
• Fast mirror resync
• Preferred read
• AUs > 16MB
• 10g RDBMS instances can no longer access the disk group
Disk Group Compatibility Example
• Compatibility may be set during disk group creation:
CREATE DISKGROUP data
DISK '/dev/sdd[bcd]1'
ATTRIBUTE 'compatible.asm' = '11.1.0.7.0',
'compatible.rdbms' = '11.1.0.7.0',
'au_size' = '4M';
• compatible.asm and compatible.rdbms cannot be reversed
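The one-way nature of these attributes can be sketched as a version comparison. This is an illustration of the rule described above, not Oracle's implementation:

```python
def _ver(v):
    """Parse a dotted version string like '11.1.0.7.0' into a tuple."""
    return tuple(int(part) for part in v.split("."))

def rdbms_can_use(rdbms_version, compatible_rdbms):
    """A client database can use the disk group only if its version is
    at least the disk group's compatible.rdbms attribute."""
    return _ver(rdbms_version) >= _ver(compatible_rdbms)

def can_set_attribute(current, proposed):
    """compatible.* attributes may only be advanced, never reversed."""
    return _ver(proposed) >= _ver(current)
```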
Q.11) Where do I run my database listener from; i.e., ASM
HOME or DB HOME?
A. It is recommended to run the listener from the ASM
HOME. This is particularly important for RAC env,
since the listener is a node-level resource. In this
config, you can create additional [user] listeners from
the database homes as needed.
- Allows multiple databases on the node to register with the
listener without being tied to a specific database home
- From a configuration tool standpoint (netca), we promote the best practice
of creating one listener per node with a node name suffix (that is
registered with CRS), and subsequent tools that create/upgrade
databases will register instances to that listener. One can always
create multiple listeners in different homes and use them, but that would
complicate the configuration
Backups for ASM:
Q.1) How do I backup my ASM instance?
A. Not applicable! ASM has no files to back up.
Unlike the database, ASM does not require a controlfile-type structure or
any other external metadata to bootstrap itself. All the data ASM needs
to start up is in on-disk structures (disk headers and other disk group
metadata).
A Disk Group is the fundamental object managed by ASM. It is
composed of multiple ASM disks. Each Disk Group is self-describing,
like a standard file system. All the metadata about the usage of the
space in the disk group is completely contained within the disk group.
If ASM can find all the disks in a disk group it can provide access to
the disk group without any additional metadata
Q.2)When should I use RMAN and when should I use
ASMCMD copy?
A. RMAN is the recommended and most complete and
flexible method to backup and transport database files
in ASM.
ASMCMD copy is good for copying single files
• Supports all Oracle file types
• Can be used to instantiate a Data Guard environment
• Does not update the controlfile
• Does not create OMF files
RMAN is the most complete and versatile means to back up databases
stored in ASM. However, many customers use BCV/split-mirrors as backups
for ASM-based databases. Many combine BCV mirrors with RMAN backup
of the mirrors. Why would you want to do that? RMAN ensures the
integrity of the database data blocks by running sanity checks as it
backs up the blocks.
Now, most of you are wondering about the 11g asmcmd copy
command and how that fits in here. asmcmd cp is not intended to
do wholesale backups (plus you'll have to put the database in hot
backup mode).
In 10g the possible ways to migrate are DBMS_FILE_TRANSFER, RMAN (copy vs.
backup), or XML DB FTP.
In 11g, we introduced the asmcmd copy command. The key point here is that copying
files out is great for:
1. Archive logs
2. Controlfiles
3. Datafiles for debugging
4. Dump sets (can be done across platforms)
Copying files in: TTS (transportable tablespaces); only supported file types can be
copied in.
ASMCMD Copy
ASMCMD> ls
+fra/dumpsets/expdp_5_5.dat
ASMCMD> cp expdp_5_5.dat sys@rac1.orcl1:+DATA/dumpsets/expdp_5_5.dat
source +fra/dumpsets/expdp_5_5.dat
target +DATA/dumpsets/expdp_5_5.dat
copying file(s)...
file, +DATA/dumpsets/expdp_5_5.dat, copy committed.
Migration
Q. I'm going to add disks to my ASM diskgroup;
how long will this rebalance take?
A. Rebalance time is heavily driven by three items:
• Amount of data currently in the diskgroup
• I/O bandwidth available on the server
• ASM_POWER_LIMIT or rebalance power level
Use V$ASM_OPERATION to monitor progress.
Q. We are migrating to a new storage array. How do I
move my ASM database from storage A to storage B?
A. Given that the new and old storage are both visible to
ASM, simply add the new disks to the ASM disk group
281
and drop the old disks. ASM rebalance will migrate
data online. Note 428681.1 covers how to move
OCR/Voting disks to the new storage array
If this is a RAC environment, Note 428681.1 covers how to move the
OCR/Voting disks to the new storage array.
ASM_SQL> ALTER DISKGROUP data
  DROP DISK data_legacy1, data_legacy2, data_legacy3
  ADD DISK '/dev/sddb1', '/dev/sddc1', '/dev/sddd1';
ASM Rebalancing
• Automatic online rebalance whenever storage configuration changes
• Only moves data proportional to storage added
• No need for manual I/O tuning
• Online migration to new storage (Disk Group DATA: legacy disks to new disks)
Q. Is it possible to unplug an ASM disk group from one
283
platform and plug into a server on another platform
(for example, from Solaris to Linux)?
A. No. Cross-platform disk group migration not
supported. To move datafiles between endian-ness
platforms, you need to use XTTS, Datapump or
Streams.
The first problem that you run into here is that Solaris and Linux
format their disks differently. Solaris and Linux do not recognize each
other’s partitions, etc.
ASM does track the endian-ness of its data. However, currently, the
ASM code does not handle disk groups whose endian-ness does not
match that of the ASM binary.
Experiments have been done to show that ASM disk groups can be
migrated between platforms that share a common format and endianness
(i.e. Windows to Linux), but this functionality is not officially
supported because it is not regularly tested.
The following links show how to migrate across platforms:
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b25159/outage.htm#CACFFIDD
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_PlatformMigrationTTS.pdf
3rd Party Software
Q. How does ASM work with multipathing software?
A. It works great! Multipathing software is at a layer
lower than ASM, and thus is transparent.
You may need to adjust ASM_DISKSTRING to specify
only the path to the multipathing pseudo devices.
Multipathing tools provide the following benefits:
• Provide a single block device interface for a multi-pathed LUN
• Detect any component failures in the I/O path; e.g., fabric port, channel adapter, or HBA
• When a loss of path occurs, ensure that I/Os are re-routed to the available paths,
with no process disruption
• Reconfigure the multipaths automatically when events occur
• Ensure that failed paths get revalidated as soon as possible and provide auto-failback
capabilities
• Configure the multi-paths to maximize performance using various load balancing
methods; e.g., round robin, least I/Os queued, or least service time
When a given disk has several paths defined, each one will be presented as a unique path
name at the OS level; e.g.; /dev/rdsk/c3t19d1s4 and /dev/rdsk/c7t22d1s4 could be pointing
to same disk device. ASM, however, can only tolerate the discovery of one unique device
path per disk. For example, if the asm_diskstring is ‘/dev/rdsk/*’, then several paths to the
same device will be discovered, and ASM will produce an error message stating this.
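The discovery problem described here can be illustrated with a small sketch: given a mapping from discovered path to underlying device identity (in practice something like the SCSI serial number), duplicate sub-paths are easy to detect, which is exactly the situation a too-broad asm_diskstring creates. The paths and serials below are hypothetical, for illustration only:

```python
def duplicate_devices(path_to_device):
    """Group discovered paths by underlying device and report any device
    seen via more than one path (the case ASM rejects)."""
    by_device = {}
    for path, device in path_to_device.items():
        by_device.setdefault(device, []).append(path)
    return {dev: paths for dev, paths in by_device.items() if len(paths) > 1}
```

With asm_diskstring narrowed to the multipathing pseudo-device pattern instead, each device is discovered exactly once and this mapping would be empty.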
When using a multipath driver, which sits above this SCSI-block layer, the driver will
generally produce a pseudo device that virtualizes the sub-paths. For example, in the
case of EMC’s PowerPath, you can use the following asm_diskstring setting of
‘/dev/rdsk/emcpower*’. When I/O is issued to this disk device, the multipath driver will
intercept it and provide the necessary load balancing to the underlying subpaths.
As long as ASM can open/read/write to the multipathing pseudo device, it should
work. Almost all multipathing products are known to work with ASM. But remember,
ASM does not certify multipathing products; though we have a list of products that
work with ASM, this is more a guide to what's available by platform/OS.
Examples of multipathing software include EMC PowerPath, Veritas DMP, Sun Traffic
Manager, Hitachi HDLM, and IBM SDDPCM. Linux 2.6 has kernel-based multipathing.
[Diagram: ASM/RDBMS I/O issued to a multipath driver pseudo device (/dev/...), which
fans out to sub-paths /dev/sda1 and /dev/sdb1 through redundant controllers and cache
to the disk.]
Q. Is ASM constantly rebalancing to manage “hot
spots”?
A. No…No…Nope!!
Bad rumor. ASM provides even distribution of extents across all disks
in a disk group. Since each disk will have an equal number of extents, no
single disk will be hotter than another. Thus the answer is no: ASM does
not dynamically move hot spots, because hot spots simply do not
occur in ASM configurations.
Rebalance only occurs on storage configuration changes (e.g. add,
drop, or resize disks).
I/O Distribution
• ASM spreads file extents evenly across all disks in a disk group
• Since ASM distributes extents evenly, there are no hot spots
[Chart: average IOPS per disk during an OLTP workload, showing roughly equal IOPS
across disks cciss/c0d2 through cciss/c0d6 in failure groups FG1 through FG4.]
As indicated, ASM implements the S.A.M.E. policy, which stripes and mirrors
files across all the disks in a disk group. If the disks are highly reliable, as
may be the case with a high-end array, mirroring can optionally be disabled
for a particular disk group. This policy of striping and mirroring across all
disks in a disk group greatly simplifies storage management and provides a
configuration of balanced performance.
Key Value Propositions
• Manageability: simple provisioning, storage array migration, VM/FS co-existence,
SQL/EM/command line, consolidation, self-tuning
• Performance: distributes load across all available storage; no ASM code in the data path
• Availability: automatic mirror rebuild, automatic bad block correction, rolling upgrades,
online patches, RAC and clusterware support
• Cost savings: shared storage pool, just-in-time provisioning, no license fees, no support fees
Summary:
• ASM requires very few parameters to run
• ASM based databases inherently leverage raw disk performance
• No additional database parameters are needed to support ASM
• Mixed ASM-database version support
• RMAN is recommended for backing up ASM based databases
• Facilitates online storage changes
• Spreads I/O evenly across all disks to maximize performance and eliminate hot spots
ASM provides filesystem and volume manager capabilities built into the Oracle database kernel.
With this capability, ASM simplifies storage management tasks, such as creating/laying out
databases and disk space management. Since ASM allows disk management to be done using
familiar create/alter/drop SQL statements, DBAs do not need to learn a new skill set or make
crucial decisions on provisioning.
The following are some key benefits of ASM:
• ASM spreads I/O evenly across all available disk drives to prevent hot spots and maximize performance.
• ASM eliminates the need for over-provisioning and maximizes storage resource utilization, facilitating database consolidation.
• Inherent large file support.
• Performs automatic online redistribution after the incremental addition or removal of storage capacity.
• Maintains redundant copies of data to provide high availability, or leverages 3rd party RAID functionality.
• Supports Oracle Database as well as Oracle Real Application Clusters (RAC).
• Capable of leveraging 3rd party multipathing technologies.
• For simplicity and easier migration to ASM, an Oracle database can contain ASM and non-ASM files. Any new files can be created as ASM files whilst existing files can also be migrated to ASM.
• RMAN commands enable non-ASM managed files to be relocated to an ASM disk group.
• Enterprise Manager Database Control or Grid Control can be used to manage ASM disk and file activities.
ASM reduces Oracle Database cost and complexity without compromising performance or availability.
ASM Collateral and Content
http://www.oracle.com/technology/asm
• ASM 11g New Features
• ASM Best Practices
• ASM vendor papers
• ASM-RAC Customer Case Studies
Top 10 ASM Questions
Extra credit questions
Q. Is ASMLIB required on Linux systems and are there
any benefits to using it?
A. ASMLIB is not required to run ASM, but it is certainly
recommended.
ASMLIB has the following benefits:
• Simplified disk discovery
• Persistent disk names
• Efficient use of system resources
o Reduced Overhead
ASMLIB provides the capability for a process (RBAL) to perform a global open/close
on the disks that are being dropped or closed.
This reduces the number of open file descriptors on the system, and thus the
probability of running out of global file descriptors. The open and close operations
are also reduced, ensuring orderly cleanup of file descriptors when storage
configuration changes occur.
A side benefit of the aforementioned items is a faster startup of the database.
o Disk Management and Discovery
With ASMLib, the ASM disk name is automatically taken from the name given to it by the
administrative tool. This simplifies adding disks and correlating OS names with ASM
names, and eliminates erroneous disk management activities, since disks are
already pre-named.
The default discovery string for ASM is NULL; however, if ASMLIB is used, the
ASMLIB default string replaces the NULL string, making disk discovery much more
straightforward. Note that disk discovery has been one of the big challenges for
administrators.
The ASMLib permissions are persistent across reboots and in the event of major/minor
number changes.
In RAC environments, disk identification and discovery are as simple as in a
single-instance environment. Once the disks are labeled on one node, the other
clustered nodes simply use the default disk discovery string, and discovery is seamless.
o No requirement to set up raw links
With ASMLib, there is no requirement to modify initialization scripts (e.g. “/etc/init.d”)
Top 10 ASM Questions
Q. Is it possible to do rolling upgrades of ASMLIB in a
RAC configuration?
A. ASMLIB is independent of Oracle Clusterware and
Oracle Database, and thus can be upgraded on its
own.
Upgrading ASMLIB on a given node requires that ASMLIB be
disabled/stopped, which in turn requires the database and ASM to be
shut down on that node. Once ASMLIB is upgraded, the stack can be
restarted.
******************************************************************************
******************************************************************************
**
ARCHITECTURE INTERVIEW QUESTIONS
******************************************************************************
******************************************************************************
**
1) What is a database?
• A database offers a single point of mechanism for storing and retrieving information with the
help of tables.
• A table is made up of columns and rows, where each column stores a specific attribute and
each row holds a value for the corresponding attribute.
• It is a structure that stores information about the attributes of the entities and relationships
among them.
• It also stores data types for attributes and indexes.
• Well known DBMSs include Oracle, IBM DB2, Microsoft SQL Server, Microsoft Access, MySQL
and SQLite.
2) What are the different types of storage systems available and which one is used by Oracle?
Two types of storage systems are available:
• Relational Database Management System (RDBMS) and Hierarchical Storage Management
System (HSM).
• Most databases use the RDBMS model; Oracle also uses the RDBMS model.
• Examples of hierarchical systems include:
• Information Management System (IMS) from IBM.
• Integrated Database Management System (IDMS) from CA.
3) Explain some examples of join methods
• Join methods are of mainly 3 types:
• Merge join – Sorts both tables on the join key, then merges the sorted rows.
• Nested loop join – Applies the filter conditions to the outer table to get a result set,
then probes the inner table once for each row of that result set.
• Hash join – Builds a hash table on the smaller table first, then probes it with rows from the
other table; matching rows are returned.
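For experimentation, each join method can be forced with an optimizer hint. A minimal sketch, assuming two hypothetical tables emp and dept joined on deptno:

```sql
-- Hash join: hash the smaller table (dept), probe it with emp rows.
SELECT /*+ USE_HASH(e d) */ e.ename, d.dname
FROM   emp e JOIN dept d ON e.deptno = d.deptno;

-- Sort-merge join: sort both row sources on deptno, then merge.
SELECT /*+ USE_MERGE(e d) */ e.ename, d.dname
FROM   emp e JOIN dept d ON e.deptno = d.deptno;

-- Nested loop join: for each outer row, probe the inner table (ideally via an index).
SELECT /*+ USE_NL(e) */ e.ename, d.dname
FROM   emp e JOIN dept d ON e.deptno = d.deptno;
```

The method actually chosen can be confirmed with EXPLAIN PLAN followed by SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);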
4) What are the components of logical data model and list some differences between logical
and physical data model?
Components of logical data model are
• Entity – Entity refers to an object that we use to store information. It has its own table.
• Attribute – It represents the information of the entity that we are interested in. It is stored as
a column of the table and has a specific datatype associated with it.
• Record – It refers to a collection of all the properties associated with an entity for one specific
condition, represented as row in a table.
• Domain – It is the set of all the possible values for a particular attribute.
• Relation – Represents a relation between two entities.
Difference between Logical and Physical data model
• Logical data model represents database in terms of logical objects, such as entities and
relationships.
• Physical data model represents database in terms of physical objects, such as tables and
constraints.
295
5) What is normalization? What are the different forms of normalization?
• Normalization is a process of organizing the fields and tables of a relational database to
minimize redundancy and dependency.
• It saves storage space and ensures consistency of our data.
There are six different normal forms
• First Normal Form – All underlying domains contain atomic values only.
• Second Normal Form – It is in first normal form and every non-key attribute is fully
functionally dependent on the primary key.
• Third Normal Form – It is in second normal form and every non-key attribute is non-transitively
dependent on the primary key.
• Boyce-Codd Normal Form – A relation R is in BCNF if and only if every determinant is a
candidate key.
• Fourth Normal Form
• Fifth Normal Form
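As a small illustration, first normal form is reached by moving a repeating group into its own table. A sketch using hypothetical ORDERS tables:

```sql
-- Unnormalized: repeating item columns in a single row (hypothetical example).
CREATE TABLE orders_unf (
  order_id NUMBER PRIMARY KEY,
  customer VARCHAR2(40),
  item1    VARCHAR2(40),
  item2    VARCHAR2(40),
  item3    VARCHAR2(40)
);

-- First normal form: one atomic row per order/item combination.
CREATE TABLE orders (
  order_id NUMBER PRIMARY KEY,
  customer VARCHAR2(40)
);

CREATE TABLE order_items (
  order_id NUMBER REFERENCES orders(order_id),
  item     VARCHAR2(40),
  PRIMARY KEY (order_id, item)
);
```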
6) Differentiate between a database and Instance and explain relation between them?
• Database is a collection of three important files which include data files, control files and redo
log files which physically exist on a disk
• An instance, by contrast, is a combination of the Oracle background processes (SMON, PMON,
DBWR, LGWR) and memory structures (SGA, PGA).
• Oracle background processes running on a computer share same memory area.
• An instance can mount and open only a single database, ever.
• A database may be mounted and opened by one or more instances (using RAC).
7) What are the components of SGA?
• SGA is used to store shared information across all database users.
• It mainly includes Library cache, Data Dictionary cache, Database Buffer Cache, Redo log
Buffer cache, Shared Pool.
• Library cache – It is used to store parsed SQL statements and PL/SQL code.
• Data Dictionary Cache – It contains the definition of Database objects and privileges granted
to users.
• Data Base buffer cache – It holds copies of data blocks which are frequently accessed, so that
they can be retrieved faster for any future requests.
• Redo log buffer cache – It records all changes made to the data files
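The current sizes of these SGA components can be inspected from the dynamic performance views, for example:

```sql
-- Size of each SGA area (buffer cache, shared pool, log buffer, ...).
SELECT name, bytes/1024/1024 AS mb
FROM   v$sgainfo
ORDER  BY bytes DESC;

-- Dynamically resizable SGA components and their current sizes.
SELECT component, current_size/1024/1024 AS mb
FROM   v$sga_dynamic_components;
```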
8) Difference between SGA and PGA.
• SGA (System Global Area) is a memory area allocated during an instance start up.
• SGA is allocated as 40% of RAM size by default.
• The overall SGA size is controlled by the SGA_TARGET / SGA_MAX_SIZE parameters defined in
the initialization parameter file (init.ora file or SPFILE); DB_CACHE_SIZE sizes only the buffer
cache component.
• PGA (Program or Process Global Area) is a memory area that stores a user session specific
information.
• PGA is allocated as 10% of RAM size by default.
9) What are the disk components in Oracle?
These are the physical components which gets stored in the disk.
• Data files
• Redo Log files
• Control files
• Password files
• Parameter files
10) What is System Change Number (SCN)?
• SCN is a unique ID that Oracle generates for every committed transaction.
• It is recorded for every change in the redo entry.
• SCN is also generated for every checkpoint (CKPT) occurred.
• It is an ever-increasing number, which is updated at least every 3 seconds.
• You can get the current SCN by querying select current_scn from v$database from SQL*Plus.
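A couple of related queries (the column is CURRENT_SCN, and the SCN/timestamp conversion functions are available from 10g onwards):

```sql
-- Current SCN of the database.
SELECT current_scn FROM v$database;

-- Map an SCN to an approximate wall-clock time, and a time back to an SCN.
SELECT scn_to_timestamp(current_scn) FROM v$database;
SELECT timestamp_to_scn(SYSTIMESTAMP - INTERVAL '5' MINUTE) FROM dual;
```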
11) What is Database Writer (DBWR) and when does DBWR write to the data file?
• DBWR is a background process that writes data blocks information from Database buffer
cache to data files.
There are 4 important situations when DBWR writes to data file
• Every 3 seconds
• Whenever checkpoint occurs
• When server process needs free space in database buffer cache to read new blocks.
• Whenever number of changed blocks reaches a maximum value.
12) What is Log Writer and when does LGWR writes to log file?
• LGWR writes redo or changed information from redo log buffer cache to redo log files in
database.
• It is responsible for moving redo buffer information to online redo log files, when you commit
and a log switch also occurs.
• LGWR writes to the redo log files when the redo log buffer is one-third full.
• It also writes for every 3 seconds.
• Before DBWR writes modified blocks to the datafiles, LGWR writes to the
log file
13) Which Table spaces are created automatically when you create a database?
• SYSTEM tablespace is created automatically during database creation.
• It will be always online when the database is open.
Other Tablespaces include
• SYSAUX tablespace
• UNDO tablespace
• TEMP tablespace
• UNDO & TEMP tablespace are optional when you create a database.
14) Which file is accessed first when Oracle database is started and What is the difference
between SPFILE and PFILE?
• The init<SID>.ora parameter file or SPFILE is accessed first (SID is the instance name).
• Settings required for starting a database are stored as parameters in this file.
• SPFILE is by default created during database creation whereas PFILE should be created from
SPFILE.
• PFILE is client side text file whereas SPFILE is server side binary file.
• SPFILE is a binary file (it can’t be opened) whereas PFILE is a text file we can edit it and set
parameter values.
• Changes made in SPFILE are dynamically effected with running database whereas PFILE
changes are effected after bouncing the database.
• We can backup SPFILE using RMAN.
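The SCOPE clause of ALTER SYSTEM controls where a change lands when the instance runs on an SPFILE. A sketch (the parameters and values below are examples only):

```sql
-- Change only the running instance (lost at the next restart):
ALTER SYSTEM SET db_file_multiblock_read_count = 32 SCOPE = MEMORY;

-- Change only the SPFILE (static parameter; takes effect at the next startup):
ALTER SYSTEM SET processes = 300 SCOPE = SPFILE;

-- Change both at once (the default when running on an SPFILE):
ALTER SYSTEM SET open_cursors = 500 SCOPE = BOTH;
```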
15) What are advantages of using SPFILE over PFILE?
• SPFILE is available from Oracle 9i and above.
• Parameters in SPFILE are changed dynamically.
• You can’t make any changes to PFILE when the database is up.
• RMAN can't back up a PFILE, but it can back up an SPFILE.
• SPFILE parameters changes are checked before they are accepted as it is maintained by Oracle
server thereby reducing the human typo errors.
16) How can you find out if the database is using PFILE or SPFILE?
• You can query Dynamic performance view (v$parameter) to know your database is using
PFILE or SPFILE.
• SQL> select value from v$parameter where name = 'spfile';
• A non-null value indicates the database is using an SPFILE.
• A null value indicates the database is using a PFILE.
• You can force a database to use a PFILE by issuing a startup command as
• SQL> startup PFILE = '<full path of PFILE location>';
17) Where are parameter files stored and how can you start a database using a specific
parameter file?
• In UNIX they are stored in $ORACLE_HOME/dbs, and in the %ORACLE_HOME%\database
directory on Windows.
• Oracle by default starts with the SPFILE located in $ORACLE_HOME/dbs.
• If you want to start the database with a specific file, append it to the startup command as
SQL> startup PFILE = '<full path of parameter file>';
• You can create PFILE from SPFILE as create PFILE from SPFILE;
• All the parameter values are now updated with SPFILE.
• Similarly, create SPFILE from PFILE; command creates SPFILE from PFILE.
18) What is PGA_AGGREGATE_TARGET parameter?
• The PGA_AGGREGATE_TARGET parameter specifies the target aggregate PGA memory
available to all server processes attached to an instance.
• Oracle sets its value to 20% of SGA.
• It is used to set overall size of work-area required by various components.
• Its value can be known by querying v$pgastat dynamic performance view.
• From sqlplus it can be known by using SQL> show parameter pga.
19) What is the purpose of configuring more than one Database Writer Processes? How many
should be used? (On UNIX)
• The DBWn process writes modified buffers in the Database Buffer Cache to data files, so that
user processes can always find free buffers.
• To efficiently free the buffer cache to make it available to user processes, you can use
multiple DBWn processes.
• We can configure additional processes (DBW1 through DBW9 and DBWa through DBWj) to
improve write performance if our system modifies data heavily.
• The initialization parameter DB_WRITER_PROCESSES specifies the number of DBWn processes,
up to a maximum of 20.
• If the Unix system being used is capable of asynchronous input/output processing then only
one DBWn process is enough, if not the case the total DBWn processes required will be twice
the number of disks used by oracle, and this can be set with DB_WRITER_PROCESSES
initialization parameter.
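DB_WRITER_PROCESSES is a static parameter, so changing it means writing it to the SPFILE and restarting the instance. For example:

```sql
-- Static parameter: record the change in the SPFILE, then restart.
ALTER SYSTEM SET db_writer_processes = 4 SCOPE = SPFILE;

-- Verify after the restart (SQL*Plus):
SHOW PARAMETER db_writer_processes
```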
20) List out the major installation steps of oracle software on UNIX in brief?
• Set up the disk and make sure you have the installation file (runInstaller) in your dump.
• Check the swap and TEMP space.
• Export the following environment variables
1) ORACLE_BASE
2) ORACLE_HOME
3) PATH
4) LD_LIBRARY_PATH
5) TNS_ADMIN
• Set up the kernel parameters and file maximum descriptors.
• Source the Environment file to the respective bash profile and now run Oracle Universal
Installer.
21) Can we check number of instances running on Oracle server and how to set kernel
parameters in Linux?
• Viewing the /etc/oratab file on a server gives the list of Oracle instances configured on that
server.
• Opening /etc/sysctl.conf with the vi editor shows a text file listing kernel-level parameters.
• We can make changes to kernel parameters as required for our environment, but only as the
root user.
• To make the changes take effect in the running kernel, run the command /sbin/sysctl -p.
• We must also set the file descriptor limits during Oracle installation, which can be done by
editing /etc/security/limits.conf as the root user.
22) What is System Activity Reporter (SAR) and SHMMAX?
• SAR is a utility to display resource usage on the UNIX system.
• sar -u shows CPU activity.
• sar -w shows swapping activity.
• sar -b shows buffer activity.
• SHMMAX is the maximum size of a shared memory segment on a Linux system.
23) List out some major environment variable used in installation?
• ORACLE_BASE=/u01/app/<installation-directory>
• ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1 (for 11g)
• ORACLE_SID=<instance-name>
• PATH=$ORACLE_HOME/bin:$PATH
• LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
• TNS_ADMIN=$ORACLE_HOME/network/admin
• These environment variables are critical when running the OUI.
24) What is a control file?
• Control file is a binary file which records the physical structure of a database.
• It includes number of log files and their respective location, Database name and timestamp
when database is created, checkpoint information.
• The CONTROL_FILES parameter in the initialization parameter file stores information about the
control file locations.
• We can multiplex control files, storing them in different locations, so that a control file remains
available even if one copy is corrupted.
• This also avoids the risk of a single point of failure.
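Adding an extra multiplexed copy when running on an SPFILE can be sketched as follows (the paths are examples only):

```sql
-- Point the instance at an additional control file copy (example paths).
ALTER SYSTEM SET control_files =
  '/u01/oradata/orcl/control01.ctl',
  '/u02/oradata/orcl/control02.ctl' SCOPE = SPFILE;
-- Then: SHUTDOWN IMMEDIATE, copy the existing control file to the new
-- location at the OS level, and STARTUP again.
```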
25) At what stage of instance, control file information is read and can we recover control file
and how to know information in a control file?
• Control file information is read during database mount.
• We can't recover or restore a lost control file directly, but we can still start the database using
one of the multiplexed copies in a different location.
• We can run the following command
SQL> alter database backup controlfile to trace;
• This produces a trace file (.trc) in the udump location; we can edit it and see the complete
database structure.
• A binary copy of the control file can also be taken with
SQL> alter database backup controlfile to '<different location/path>';
26) How can you obtain Information about control file?
• Control file information is visible in the initialization parameter file.
• We can query v$controlfile to display the names of the control files.
• From SQL*Plus we can execute
SQL> show parameter control_files;
• The above query gives the names and locations of the control files on our physical disks.
• We can also edit the PFILE with a vi editor; the control_files parameter tells us the number and
location of the control files.
27) How do you resize a data file and tablespace?
• Prior to Oracle 7.2 you couldn't resize a datafile.
• The solution was to drop the tablespace and recreate it with different sized datafiles.
• From 7.2 onwards you can resize a datafile by using ALTER DATABASE DATAFILE '<file_name>'
RESIZE <size>M;
• Resizing a tablespace involves either creating a new datafile or resizing an existing one.
• ALTER TABLESPACE <tablespace_name> ADD DATAFILE '<datafile_name>' SIZE <size>M;
creates a new datafile.
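Putting the above together, with example paths and sizes:

```sql
-- Grow (or shrink, down to the highest-used block) an existing datafile:
ALTER DATABASE DATAFILE '/u01/oradata/orcl/users01.dbf' RESIZE 500M;

-- Or extend the tablespace with a new datafile:
ALTER TABLESPACE users ADD DATAFILE '/u01/oradata/orcl/users02.dbf' SIZE 200M;

-- Or let the file grow on demand instead of resizing manually:
ALTER DATABASE DATAFILE '/u01/oradata/orcl/users01.dbf'
  AUTOEXTEND ON NEXT 50M MAXSIZE 2G;
```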
28) Name the views used to look at the size of a datafile, controlfiles, block size, determine free
space in a tablespace ?
• DBA_DATA_FILES or the v$datafile view can be used to look at the size of a datafile.
• DBA_FREE_SPACE is used to determine free space in a tablespace.
• V$CONTROLFILE is used to look at the control files; the control file records settings such as
MAXLOGFILES, MAXLOGMEMBERS and MAXINSTANCES.
• select * from v$controlfile gives the name and size of each control file.
• From SQL*Plus, show parameter db_block_size gives the database block size.
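These views are often combined; a common sketch for used vs. free space per tablespace:

```sql
-- Total vs. free space per tablespace, in MB.
SELECT d.tablespace_name,
       ROUND(SUM(d.bytes)/1024/1024)        AS total_mb,
       ROUND(NVL(f.free_bytes, 0)/1024/1024) AS free_mb
FROM   dba_data_files d
LEFT   JOIN (SELECT tablespace_name, SUM(bytes) AS free_bytes
             FROM   dba_free_space
             GROUP  BY tablespace_name) f
       ON f.tablespace_name = d.tablespace_name
GROUP  BY d.tablespace_name, f.free_bytes;
```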
29) What is archive log file?
• In ARCHIVELOG mode, the database makes an archive of every redo log file that fills up; these
are called archived redo logs or archive log files.
• By default the database runs in NOARCHIVELOG mode, so online (HOT) backups can't be
performed.
• In that mode you must shut down the database to perform a clean (COLD) backup, and
recovery is only possible to the time of the previous backup.
• Archive log files are stored by default in the FRA (Flash Recovery Area).
• We can also define our own location by setting the log_archive_dest parameter.
30) Assume you work in an xyz company as senior DBA and on your absence your back up DBA
has corrupted all the control files while working with the ALTER DATABASE BACKUP
CONTROLFILE command. What do you do?
• As long as all the data files are safe and your backup DBA's BACKUP CONTROLFILE command
completed successfully, you are in the safe zone.
• We can restore the control file by performing the following steps
1) CONNECT INTERNAL; STARTUP MOUNT
2) TAKE ANY READ-ONLY TABLESPACE OFFLINE
3) ALTER DATABASE DATAFILE ... OFFLINE
4) RECOVER DATABASE USING BACKUP CONTROLFILE
5) ALTER DATABASE OPEN RESETLOGS
6) BRING THE READ-ONLY TABLESPACES BACK ONLINE
• Shut down and back up the system, then restart.
• Then issue the command ALTER DATABASE BACKUP CONTROLFILE TO TRACE.
• This output can be used for control file recovery as well.
If no control file backup is available, then the following will be required
1) CONNECT INTERNAL; STARTUP NOMOUNT
2) CREATE CONTROLFILE .....;
• But we need to know all of the datafiles, logfiles, and the settings of MAXLOGFILES,
MAXLOGMEMBERS, MAXLOGHISTORY and MAXDATAFILES for the database to use the command.
31) Can we reduce the space of TEMP datafile? How?
• Yes, we can reduce the space of the TEMP datafile.
• Prior to Oracle 11g, you had to recreate the datafile.
• In Oracle 11g you reduce the space of a TEMP datafile by shrinking the TEMP tablespace; it is a
new feature in 11g.
• Dynamic performance views such as V$TEMP_SPACE_HEADER can be very useful in
determining which tablespace to shrink.
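The 11g shrink syntax, with example names and sizes:

```sql
-- 11g and later: shrink the TEMP tablespace online, keeping 100M allocated.
ALTER TABLESPACE temp SHRINK SPACE KEEP 100M;

-- Or shrink a single tempfile (example path):
ALTER TABLESPACE temp SHRINK TEMPFILE '/u01/oradata/orcl/temp01.dbf';

-- Check allocated vs. used temp space first:
SELECT tablespace_name, bytes_used, bytes_free FROM v$temp_space_header;
```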
32) What do you mean by database backup and which files must be backed up?
• The database stores the most crucial data of the business, so it's important to keep that data
safe, and this can be achieved by backups.
• The following files must be backed up
• Database files (the headers of the datafiles are frozen during backup)
• Control files
• Archived log files
• Parameter files (SPFILE and PFILE)
• Password file
33) What is a full backup and name some tools you use for full backup?
• A full backup is a backup of all the control files, data files, and the parameter file (SPFILE or
PFILE).
• You should also back up your ORACLE_HOME binaries, which are used for cloning.
• A full backup can be performed even when the database runs in NOARCHIVELOG mode.
• As a rule of thumb, you must shut down your database before you perform a full backup.
• A full or COLD backup can be performed by using the copy command in UNIX.
34) What are the different types of backup’s available and also explain the difference between
them?
• There are 2 types of backups
1) COLD backup (user managed & RMAN)
2) HOT backup (user managed & RMAN)
• A hot backup is taken while the database is still online; the database must be in ARCHIVELOG
mode.
• A cold backup is taken when the database is offline.
• A hot backup is an inconsistent backup, whereas a cold backup is a consistent backup.
• You can begin backup by using the following command
SQL> alter database begin backup;
• End backup by
SQL> alter database end backup;
35) How to recover database if we lost the control file and we do not have a backup and what is
RMAN?
• We can recover our database to any point in time when we have copies of our control files on
different mount points.
• Also check whether a control file creation script is available in a trace file located in
USER_DUMP_DEST, or in the alert log, to recover the database.
RMAN
• RMAN (Recovery Manager) is a tool supplied by Oracle that can be used to manage backup
and recovery activities.
• You can perform both offline (cold) and online (hot) backups using RMAN.
• We need to configure the Flash Recovery Area of the database to use RMAN.
• RMAN maintains its repository of backup information in the control file.
36) Name the architectural components of RMAN
• RMAN executable
• Server process
• Channels
• Target database
• Recovery catalog database
• Media management layer
• Backup sets and backup pieces
37) What is a recovery catalog?
• Recovery catalog is an inventory of backup taken by RMAN for the database.
• The size of the recovery catalog schema depends upon the number of databases monitored
by the catalog.
• It is used to restore a physical backup, reconstruct it, and make it available to the oracle
server.
• RMAN can be used without recovery catalog.
• Recovery catalog also holds RMAN stored scripts.
38) List some advantages of using RMAN.
• Tablespaces are not put in backup mode, therefore there is no extra redo generated during
online backups.
• Incremental backups that only copy data blocks which have changed since the last backup.
• Detection of corrupt blocks.
• Built in reporting and listing commands.
• Parallelization of I/O operations.
39) How to bring a database from NOARCHIVELOG mode to ARCHIVELOG mode?
• You should change your init<SID>.ora file with the following information
• log_archive_dest='/u01/oradata/archlog' (for example)
• log_archive_format='%t_%s.dbf'
• log_archive_start=true (prior to 10g)
• SQL> shutdown;
• SQL> startup mount;
• SQL> alter database archivelog;
• SQL> alter database open;
• Make sure you back up your database before switching to ARCHIVELOG mode.
40) What are the different stages of database startup?
• Database undergoes different stages before making itself available to end users
• The following stages are involved in the startup of a database
• NoMount
• Mount
• Open
• NoMount – The Oracle instance is started based on the parameters defined in the SPFILE.
• Mount – Based on the control file locations recorded in the SPFILE, the instance opens and
reads the control files and proceeds to the next stage.
• Open – The datafiles and redo log files are made available to the end users.
41) Name some of the important dynamic performance views used in Oracle.
• V$PARAMETER
• V$DATABASE
• V$INSTANCE
• V$DATAFILE
• V$CONTROLFILE
• V$LOGFILE
42) What are the different methods we can shutdown our database?
• SHUTDOWN (or) SHUTDOWN NORMAL
No new connections are accepted, and Oracle waits for every user to close their session.
• SHUTDOWN TRANSACTIONAL
No new connections are accepted; Oracle waits for existing transactions to commit and then
logs the sessions out without asking the users.
• SHUTDOWN IMMEDIATE
No new connections are accepted; committed transactions are reflected in the database, and
uncommitted transactions are rolled back to their previous values.
• SHUTDOWN ABORT
It's just like an immediate power-off for the database: whatever transactions are running, it
simply stops all activity (even committed changes may not yet be written to the datafiles) and
makes the database unavailable. The SMON process takes responsibility for recovery during the
next startup of the database.
• SHUTDOWN NORMAL, TRANSACTIONAL and IMMEDIATE are clean shutdown methods, as the
database maintains its consistency.
• SHUTDOWN ABORT leaves the database in an inconsistent state; consistency is restored by
instance recovery at the next startup.
43) What are the different types of indexes available in Oracle?
Oracle provides several Indexing schemas
• B-tree index – Best for retrieving a small amount of information from a large table.
• Global and local indexes – Relate to partitioned tables and indexes.
• Reverse key index – Most useful for Oracle Real Application Clusters applications.
• Domain index – An application-specific index created and managed by an application
(e.g. Oracle Text).
• Hash cluster index – An index that is defined specifically for a hash cluster.
44) What is the use of ALERT log file? Where can you find the ALERT log file?
• The alert log is a log file that records database-wide events and is used for troubleshooting.
• We can find the log file in the directory specified by the BACKGROUND_DUMP_DEST
parameter.
• The following events are recorded in the alert log file:
• Database shutdown and startup information.
• All non-default parameters.
• Oracle internal (ORA-600) errors.
• Information about a modified control file.
• Log switches.
45) What is a user process trace file?
• It is an optional file which is produced by user session.
• It is generated only if the value of SQL_TRACE parameter is set to true for a session.
• SQL_TRACE parameter can be set at database, instance, or session level.
• If it is set at the instance level, trace files will be created for all connected sessions.
• If it is set at session level, trace file will be generated only for specified session.
• The location of user process trace file is specified in the USER_DUMP_DEST parameter.
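Tracing can be switched on at the session level, or for another session via DBMS_MONITOR (the SID/SERIAL# values below are placeholders):

```sql
-- Trace the current session:
ALTER SESSION SET sql_trace = TRUE;
-- ... run the statements of interest ...
ALTER SESSION SET sql_trace = FALSE;

-- Trace another session (10g and later), identified by SID and SERIAL#:
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456);
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
```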
46) What are different types of locks?
There are different types of locks, which are given as follows:
• System locks – controlled by oracle and held for a very brief period of time.
• User locks – Created and managed using dbms_lock package.
• Different types of user locks are given as follows
• UL Lock – Defined with dbms_lock package.
• TX lock – Acquired once for every transaction. It is a row transaction lock.
• TM Lock – Acquired once for each object, which is being changed. It is a DML lock. The ID1
column identifies the object being modified.
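Blocking locks can be spotted from V$SESSION and V$LOCK (BLOCKING_SESSION is available from 10g):

```sql
-- Sessions currently blocked, and who is blocking them:
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;

-- Lock details: type (TX/TM), mode held (lmode) and mode requested:
SELECT sid, type, id1, id2, lmode, request
FROM   v$lock
WHERE  block = 1 OR request > 0;
```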
47) What do db_file_sequential_read and db_file_scattered_read events define?
• The db_file_sequential_read event generally indicates index usage.
• It shows an access by rowid.
• The db_file_scattered_read event indicates a full table scan.
• The db_file_sequential_read event reads a single block at a time.
• The db_file_scattered_read event reads multiple blocks at a time.
48) What is a latch and explain its significance?
• Latch is an on/off switch in oracle that a process must access in order to perform certain type
of activities.
• They enforce serial access to the resources and limit the amount of time for which a single
process can use a resource.
• A latch is acquired for a very short amount of time to ensure that the resource is allocated.
• We may face performance issues which may be due to either of the two following reasons
• Lack of availability of resource.
• Poor application programming resulting in high number of requests for resource.
• Latch information is available in the V$LATCH and V$LATCHHOLDER dynamic performance
views.
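A quick sketch of querying V$LATCH for the latches with the most contention:

```sql
-- Top 10 latches by misses (a miss means a process had to retry or sleep).
SELECT *
FROM  (SELECT name, gets, misses, sleeps
       FROM   v$latch
       ORDER  BY misses DESC)
WHERE ROWNUM <= 10;
```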
49) Explain the architecture of data guard?
Data guard architecture includes the following components
• Primary database – Refers to the production database.
• Standby database – Refers to a copy of the primary (production) database. There may be more
than one standby database.
• Log transport services – Manage the transfer of archived log files from the primary to the
standby database.
• Network configuration – Refers to the network connection between the primary and standby
databases.
• Log apply services – Apply the archived logs to the standby database.
• Role management services – Manage the role changes between the primary and standby
databases.
• Data Guard broker – Manages the Data Guard creation process and monitors Data Guard.
50) What is role transition and when does it happen?
• A database operates in one of the following mutually exclusive roles
Primary
Standby
• Role transition is the change of role between the primary and standby databases.
• Data Guard enables you to change these roles dynamically by issuing SQL statements.
• Transition happens between primary and standby databases in the following ways
• Switchover, where the primary database is switched to the standby role and the standby to
the primary role.
• Failover, where a standby database is used as a disaster recovery solution in case of a failure
of the primary database.
51) What is the Database Resource Manager (DRM)?
• DRM allows you to create resource plans, which specify resource allocation to various
consumer groups.
• DRM offers an easy-to-use and flexible system by defining distinct independent components.
• It enables you to limit the length of time a user session can stay idle, and automatically
terminates long-running SQL statements and user sessions.
• It sets the initial login priorities for various consumer groups.
• DRM will automatically queue all subsequent requests until the currently running sessions
complete.
******************************************************************************
******************************************************************************
**
DATA DICTIONARY VIEWS
******************************************************************************
******************************************************************************
**
V$TABLESPACE :- Name and number of all tablespaces, from the control file (TS#, name).
V$ENCRYPTED_TABLESPACES :- Name and encryption algorithm of all encrypted tablespaces.
DBA_TABLESPACES, USER_TABLESPACES :- Descriptions of all (or user-accessible) tablespaces.
DBA_TABLESPACE_GROUPS :- Displays the tablespace groups and the tablespaces that belong to them.
DBA_SEGMENTS, USER_SEGMENTS :- Information about segments within all (or user-accessible) tablespaces (segment_name, tablespace_name, extents, bytes).
DBA_EXTENTS, USER_EXTENTS :- Information about data extents within all (or user-accessible) tablespaces (segment_name, tablespace_name, bytes).
DBA_FREE_SPACE, USER_FREE_SPACE :- Information about free extents within all (or user-accessible) tablespaces (tablespace_name, bytes).
DBA_TEMP_FREE_SPACE :- Displays the total allocated and free space in each temporary tablespace.
V$DATAFILE :- Information about all datafiles, including the tablespace number of the owning tablespace (FILE#, name, bytes).
V$TEMPFILE :- Information about all tempfiles, including the tablespace number of the owning tablespace.
DBA_DATA_FILES :- Shows files (datafiles) belonging to tablespaces (file_name, tablespace_name, bytes).
DBA_TEMP_FILES :- Shows files (tempfiles) belonging to temporary tablespaces.
V$TEMP_EXTENT_MAP :- Information for all extents in all locally managed temporary tablespaces.
V$TEMP_EXTENT_POOL :- For locally managed temporary tablespaces: the state of temporary space cached and used by each instance.
V$TEMP_SPACE_HEADER :- Shows space used/free for each tempfile.
DBA_USERS :- Default and temporary tablespaces for all users (username, default_tablespace).
DBA_TS_QUOTAS :- Lists tablespace quotas for all users.
V$SORT_SEGMENT :- Information about every sort segment in a given instance. The view is only updated when the tablespace is of the TEMPORARY type.
V$TEMPSEG_USAGE :- Describes temporary (sort) segment usage by user for temporary or permanent tablespaces.
V$LOG :- Displays the redo log file information from the control file (members, status).
V$LOGFILE :- Identifies redo log groups and members and member status (member name).
V$LOG_HISTORY :- Contains log history information.
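A typical use of the tablespace views above is checking free space per tablespace (a minimal sketch):

```sql
-- Free space per tablespace, summed from DBA_FREE_SPACE
SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024) AS free_mb
FROM   dba_free_space
GROUP  BY tablespace_name
ORDER  BY tablespace_name;
```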
Datafiles Data Dictionary Views:
The following data dictionary views provide useful information about the datafiles of a database:
DBA_DATA_FILES :- Provides descriptive information about each datafile, including the tablespace to which it belongs and the file ID. The file ID can be used to join with other views for detail information.
DBA_EXTENTS, USER_EXTENTS :- DBA view describes the extents comprising all segments in the database; contains the file ID of the datafile containing the extent. USER view describes extents of the segments belonging to objects owned by the current user.
DBA_FREE_SPACE, USER_FREE_SPACE :- DBA view lists the free extents in all tablespaces; includes the file ID of the datafile containing the extent. USER view lists the free extents in the tablespaces accessible to the current user.
V$DATAFILE :- Contains datafile information from the control file.
V$DATAFILE_HEADER :- Contains information from datafile headers (checkpoint information).
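A quick datafile inventory using DBA_DATA_FILES (a minimal sketch):

```sql
-- List datafiles per tablespace with their size and file ID
SELECT file_id,
       file_name,
       tablespace_name,
       ROUND(bytes/1024/1024) AS size_mb
FROM   dba_data_files
ORDER  BY tablespace_name, file_id;
```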
ASM VIEWS:-
V$ASM_ALIAS :- Displays every alias for all disk groups that are mounted by the ASM instance.
V$ASM_DISKGROUP :- Displays every disk group that exists in the ASM instance; performs disk discovery and lists the disk groups.
V$ASM_DISKGROUP_STAT :- Lists the disk groups without performing disk discovery.
V$ASM_FILE :- Displays all files within each disk group that is mounted by the ASM instance.
V$ASM_TEMPLATE :- Displays all templates within each disk group that is mounted by the ASM instance.
V$ASM_DISK :- Contains disk details and usage statistics (performs ASM disk discovery).
V$ASM_DISK_STAT :- Contains disk details and usage statistics without performing disk discovery.
V$ASM_CLIENT :- Lists all database instances that are connected to the ASM instance.
V$ASM_OPERATION :- Similar in function to V$SESSION_LONGOPS; shows every long-running operation (such as a file striping/rebalance operation) executing within the ASM instance.
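Two common checks against the ASM views above, sketched below (run from the ASM or a connected database instance):

```sql
-- Disk group capacity without triggering disk discovery
SELECT name, state, type, total_mb, free_mb
FROM   v$asm_diskgroup_stat;

-- Progress of a running rebalance operation
SELECT group_number, operation, state, power, est_minutes
FROM   v$asm_operation;
```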
DATA PUMP VIEWS:-
DBA_DATAPUMP_JOBS :- Displays information on running Data Pump jobs; also comes in the USER_DATAPUMP_JOBS variety (owner_name, job_name, operation).
DBA_DATAPUMP_SESSIONS :- Provides session-level information on Data Pump jobs.
DATAPUMP_PATHS :- Provides a list of valid object types that you can associate with the INCLUDE or EXCLUDE parameters of expdp or impdp.
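Checking for running Data Pump jobs with the views above (a minimal sketch):

```sql
-- Currently defined Data Pump jobs and their state
SELECT owner_name, job_name, operation, job_mode, state
FROM   dba_datapump_jobs;
```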
DBLINK VIEWS:-
V$DBLINK :- Lists all open database links in your session, that is, all database links with the IN_TRANSACTION column set to YES.
DBA_DB_LINKS :- Lists all database links in the database.
ALL_DB_LINKS :- Lists all database links accessible to the connected user.
USER_DB_LINKS :- Lists all database links owned by the connected user.
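Listing every database link with its owner and target host (a minimal sketch using DBA_DB_LINKS):

```sql
-- All database links in the database
SELECT owner, db_link, username, host
FROM   dba_db_links
ORDER  BY owner, db_link;
```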
PROFILE VIEWS:-
DBA_TS_QUOTAS, USER_TS_QUOTAS :- Describe tablespace quotas for users.
USER_PASSWORD_LIMITS :- Describes the password profile parameters that are assigned to the user.
USER_RESOURCE_LIMITS :- Displays the resource limits for the current user.
DBA_PROFILES :- Displays all profiles and their limits.
RESOURCE_COST :- Lists the cost for each resource.
V$SESSION :- Lists session information for each current session, including the user name.
V$SESSTAT :- Lists user session statistics.
V$STATNAME :- Displays decoded statistic names for the statistics shown in the V$SESSTAT view.
PROXY_USERS :- Describes users who can assume the identity of other users.
http://docs.oracle.com/cd/B19306_01/network.102/b14266/admusers.htm
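A typical query against DBA_PROFILES, shown as a sketch: list the password-related limits for every profile.

```sql
-- Password limits (FAILED_LOGIN_ATTEMPTS, PASSWORD_LIFE_TIME, ...)
SELECT profile, resource_name, limit
FROM   dba_profiles
WHERE  resource_type = 'PASSWORD'
ORDER  BY profile, resource_name;
```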