Periodic Jobs and Tasks in SAP BW
Best Practice for Solution Management
Version Date: May 2006
The newest version of this Best Practice can always be obtained through
the SAP Solution Manager or the SAP Service Marketplace.
Contents
Applicability, Goals, and Requirements
Best Practice Procedure and Verification
    Preliminary Tasks
    Procedure
        General Tasks
        Monitoring - Basis
        Monitoring - Application
        Performance
        Data Management
        Data Consistency
Further Information
Applicability, Goals, and Requirements
To ensure that this Best Practice is the one you need, consider the following goals and requirements.
Goal of Using this Service
This service provides recommendations about jobs and tasks that have to be performed on a regular
basis to ensure system performance and stability in a BW system.
Alternative Practices
You can get SAP experts to deliver this Best Practice if you order the Solution Management
Optimization (SMO) service known as System Administration Workshop. For further details concerning
this service, please refer to the SAP Service Marketplace: http://service.sap.com/supportservices → SAP System Administration.
Staff and Skills Requirements
For optimal benefit you should make sure that you have experience in the following areas:
• Knowledge of standard SAP application and database monitors
• Practical experience with the SAP BW Administrator Workbench
• Experience with monitoring of data uploads
• Experience with monitoring of SAP BW queries
• Experience in job scheduling in an SAP environment
System Requirements
This document refers to BW 3.x and NetWeaver 2004s (BI) systems. In order to be of relevance and benefit to all BW administrators, it does not cover features or restrictions that belong to any particular database or operating system.
Duration and Timing
Periodic jobs and tasks have to be performed in a BW system on an ongoing basis. The time required for these periodic jobs and tasks depends on the objects being used and the data volume being loaded.
Best Practice Procedure and Verification
Preliminary Tasks
It is recommended that you read this Best Practice completely before implementing any of its recommendations. We strongly advise against direct changes in the production environment, and
recommend first testing any changes in a test system.
Procedure
General Tasks
EarlyWatch Alert
Frequency: Weekly
The free SAP EarlyWatch Alert service can be used to facilitate BW maintenance. It offers a unique
application and basis perspective on the system. Utilize the SAP EarlyWatch alert as part of the
system monitoring strategy. It is also a good information source and checklist which can be used in
meetings between the application and basis teams.
Support Package Strategy
Frequency: Quarterly
To keep the system up to date, SAP recommends implementing Support Packages and/or patches in the system landscape on a regular basis. This prevents already known and fixed bugs from affecting your business and lets you benefit from product improvements.
To guarantee an optimal level of support from SAP, the system has to be kept at an up-to-date status.
Corrections for BW (front end, server, plug-in or add-on) are only made available in the aforementioned Support Packages. Except in individual cases, no description of the correction (table entries, coding) is given in the notes. In general, SAP does not carry out corrections directly in customer systems.
Therefore, a periodic maintenance window has to be planned by the application team in cooperation with the IT department to implement and test these Support Packages or patches in the customer-specific business scenario (at least once per quarter). It is recommended to apply Support Package Stacks, which are usually delivered quarterly (see http://service.sap.com/sp-stacks).
Corrections are only released as notes if problems are caused by applying SAP Support Packages or patches. In such cases, the matter is referred to in the SAPBWNEWS, which are updated with every BW Support Package and stored as notes in the SAP Service Marketplace. For more detailed information, please check the current information in the SAP Service Marketplace:
http://service.sap.com/bi → Services & Implementation → News SAP BW Support
Backup Strategy
Frequency: Daily
A backup and restore concept needs to be created. This concept should define how to backup and
restore each individual system component.
A B&R concept must also take into account possible procedures in case of a point-in-time recovery of
one of the system components. A procedure must be defined and tested to handle cases where
specific application data is lost in one of the systems. A vital factor for the operability of a restore in
case of an emergency is a thorough testing of procedures and training of administrators on restore
and recovery scenarios.
For details, please refer to the Best Practice document "Backup and Restore for mySAP.com", which is available either through the SAP Solution Manager or in the SAP Service Marketplace:
http://service.sap.com/atg
This document lists, among many other contents, suggestions and recommendations on:
• Backup and recovery procedures for all individual components of your system landscape
• Procedures for performing a landscape-wide backup and recovery
• Dealing with point-in-time restores
The document also contains an example of how to set up a viable Backup and Recovery procedure in
a mySAP.com landscape.
Transport Window
Frequency: Weekly
Transports and SAP Notes should be applied to the productive BW system in a defined time window
when no data uploads are taking place.
There should be clearly defined guidelines for the conditions which require the productive system to
be opened for emergency corrections - this should only be done on an exceptional basis. In general all
changes (except the explicitly allowed ones in the BW transport system) should be made in the
development system and transported via the quality assurance and test systems to the productive
environment. Well defined and documented test scenarios should be used, and if necessary adapted
to ensure the quality of all transports released to the productive environment.
When transports, notes or support packages are applied in SAP R/3, this may also have an effect on
the extraction structures or extraction process for SAP BI. Fields and/or programs can be changed or
simply deactivated. It can be the case that some BI extractions do not work after these changes or that
deltas may be lost. The BI team may also lose valuable time investigating these issues because they were not aware of these standard program changes.
There should be a procedure in place to inform the BI team before notes that may have an influence on extraction or updating are applied in SAP source systems. The BI team must have the opportunity to take the necessary actions (e.g. load delta data). As this dependency may not always be mentioned in the note itself, awareness of these interactions should be created in the SAP source system team. There should be a special awareness of notes in the following components:
• BW-BCT* (affecting extraction logic)
• BC-BW (affecting delta queue and/or Service API)
• All components from which data is extracted from SAP systems
These notes should be investigated very carefully to verify whether there are changes in structures or in the updating or extraction logic. It is very important to note that prior to Support Package and Plug-In upgrades in the source system, it is necessary to empty the delta queues for all DataSources.
Monitoring - Basis
Basis Jobs
Frequency: Hourly to monthly, depending on the specific job.
There are a number of jobs that have to run periodically in a live BW installation, for example, to delete
outdated jobs. You can easily schedule these jobs as follows: call transaction SM36 and choose the 'Standard jobs' button.
Please refer to SAP Note 16083 for details.
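If you prefer to schedule such a housekeeping job programmatically rather than via the 'Standard jobs' button, a minimal sketch using the standard job scheduling function modules could look as follows. The job name, the variant Z_DELETE_14_DAYS and the choice of RSBTCDEL (job deletion) as an example report are assumptions for illustration only; please check SAP Note 16083 for the reports and variants that are actually appropriate for your release.

REPORT z_schedule_housekeeping.

" Minimal sketch: schedule the job deletion report RSBTCDEL as a daily
" background job. Job name and variant name are hypothetical examples;
" the variant must exist and contain the desired retention period.
DATA: lv_jobname  TYPE btcjob VALUE 'Z_DELETE_OLD_JOBS',
      lv_jobcount TYPE btcjobcnt.

" Open a new background job definition
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount
  EXCEPTIONS
    OTHERS   = 1.
CHECK sy-subrc = 0.

" Add the housekeeping report as a job step using an existing variant
SUBMIT rsbtcdel USING SELECTION-SET 'Z_DELETE_14_DAYS'
       VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

" Release the job to start immediately and repeat daily
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobcount  = lv_jobcount
    jobname   = lv_jobname
    strtimmed = 'X'
    prddays   = 1
  EXCEPTIONS
    OTHERS    = 1.
IF sy-subrc <> 0.
  WRITE: / 'Job could not be scheduled.'.
ENDIF.

In most installations the standard jobs delivered via SM36 are sufficient; a custom wrapper like this is only useful if the scheduling has to be integrated into your own administration programs.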
Basis Monitoring
Frequency: Daily
The normal basis monitoring also has to be performed on a BW system, including (among others) the
following logs and monitors:
• System Log (transaction SM21)
• ABAP Runtime Errors (transaction ST22)
• Operating System Monitor (transaction ST06)
• Tune Summary (transaction ST02)
Monitoring - Application
Monitor Data Staging
Frequency: Daily
Please evaluate the monitoring strategy for the production BW system. If there are reports or data
which need to be available at a certain agreed time, then the upload process for this data needs to be
analyzed and it should be decided if it is necessary to monitor the jobs during the night or at the
weekend.
Uploads and reports should be assigned a business priority status, and a monitoring strategy appropriate to the priority can then be defined. For example, a high-priority report could be the daily Logistics report; a priority period could be the period-end closing.
Performing the monitoring in the morning, one hour before the query users start working, is cost effective and often sufficient for many uploads, but it may cause severe delays in data and report availability if priority uploads need to be restarted. In the case of failed extractions it can also mean that restarting the extraction process in the morning causes an extra load on the source system during online working hours.
A successful BW monitoring strategy requires close co-operation between both application and basis
teams. In BW, in comparison to R/3, these two areas of expertise are more closely linked and the
teams are interdependent. As BW uploads involve extractions in other systems, it is also necessary to
ensure that communication and monitoring is as seamless as possible between the BW and source
system teams.
If the monitoring is to be performed by Basis staff, further training in BW will be necessary, and vice versa. Close co-operation could mean assigning priorities to certain BW jobs and providing documentation that details the correct course of action and the contact people in case a job fails.
Automation methods include adding notification processes (available as standard) to critical Process
Chains.
The following monitors and tools are important for Data Staging:
• Process Chain Overview: For the application-side analysis of the processes executed via process chains, you can use the Process Chain Overview (transaction RSPC). Here you get an overview of the progress of your process chains and of which processes are currently running, as well as of the process steps completed so far for each process. From here, you can jump to more detailed monitors of single processes (Open Hub Monitor, Upload Monitor) or to the job log of the processes.
• Open Hub Monitor: For problems with the Open Hub Service and InfoSpokes you can use the Open Hub Monitor. Similar to the Upload Monitor, the Open Hub Monitor shows the progress of each request and the single process steps.
• Upload Monitor: To get an overview of all upload processes running on your BW system, use the Upload Monitor (transaction RSMO). The detail monitor of each request (upload process) shows the progress of the upload and the single process steps.
• Changerun Monitor: For problems with change runs you can use the change run monitor (transaction CHANGERUNMONI).
Monitor Query Performance
Frequency: Weekly
To improve end-user satisfaction, SLRs (service level requirements) should be defined for all new
queries and proactive query performance monitoring should be set up. It should not be standard procedure that long-running queries are only identified by the end users in the functional department and then reported to the technical department.
Classify and prioritize the queries. Based on the outcome of this categorization, ensure that
appropriate monitoring and tuning procedures are in place to guarantee the maximum performance
and availability of these queries. This should include:
• Having a minimal system load when the query runs or when the associated data is uploaded.
• Having the responsible people on call during a critical load and ensuring that the correct procedures are in place to contact that person.
SLRs should be defined by the functional department for all new queries. The technical department should check whether it is possible to meet these requirements by testing the queries in the development system; this requires a comparable data volume in the development and production systems. If aggregates are not suitable or do not improve the performance of these queries to the required level, consider whether the queries could be tuned by pre-filling the OLAP cache with the Reporting Agent. Note that usually only a restricted number of queries can be pre-calculated; therefore prioritization is necessary and the performance of these important queries should be monitored.
As the amount of data read by a query and the reporting behaviour of the end users may change over time, it is important to also monitor query performance regularly, e.g. weekly or when a large amount of data is loaded to the InfoProvider. In NetWeaver 2004s (BI) you can use the AdminCockpit to monitor query performance; in BW 3.5 you can use the Technical Content and/or transaction ST03 (the Workload Monitor).
Performance
Compression
Frequency: Daily
InfoCubes should be compressed regularly. Uncompressed cubes increase data volume and have
a negative effect on query and aggregate build performance. If too many uncompressed requests are
allowed to build up in an InfoCube, this can eventually cause unpredictable and severe performance
problems. Basically the F-fact table is optimized for writing (upload) and the E-fact table is optimized
for reading (queries).
A well run and high performing BW system is not possible unless there is regular compression of
InfoCubes and aggregates. A regular compression strategy should be defined with the business data
owners. In line with the requirements of the business process, data in the F fact table should be
compressed regularly. For more information refer to the documentation in the SAP Help Portal:
http://help.sap.com → SAP NetWeaver → Business Information Warehouse → Administrator Workbench → Managing Data Targets → Managing InfoCubes → InfoCube Compression
Technical description
During the upload of data, a full request will always be inserted into the F-fact table. Each request gets
its own request ID and partition (DB dependent), which is contained in the 'package' dimension. This
feature enables you, for example, to delete a request from the F-fact table after the upload. However,
this may result in several entries in the fact table with the same values for all characteristics except the
request ID. This will increase the size of the fact table and number of partitions (DB dependent)
unnecessarily and consequently decrease the performance of your queries. During compression,
these records are summarized to one entry with the request ID '0'. Once the data has been
compressed, some functions are no longer available for this data (for example, it is not possible to
delete the data for a specific request ID).
Transactional InfoCubes in a BPS environment
You should compress your InfoCubes regularly, especially the transactional InfoCubes. In a BPS planning process, a request is closed when the open request contains 50,000 records or when the InfoCube is switched manually between loading and planning mode.
For transactional InfoCubes that are used for BPS there is another advantage of compression (on Oracle databases): The F-fact table has a B-tree index and is partitioned according to the request ID, whereas the E-fact table has a bitmap index and is partitioned according to your settings (time characteristics). Read accesses to the E-fact table are faster than those to the F-fact table because B-tree indexes are favorable for write processes but not for read processes. Please check SAP Note 217397 for details.
The BPS delete function is used to remove data from the selected planning package. The records are
not directly deleted from the InfoCube. Instead, the system creates additional records with offsetting
values. The original and offsetting records are deleted when the InfoCube is compressed.
Calculate Statistics
Frequency: Daily
For database performance reasons, it is essential to ensure that the database optimizer statistics are
collected at the optimum time and in the recommended way. Collect the database optimizer statistics
after changes in the data. In a BW system, this means that optimizer statistics should be scheduled
after the data loading.
[Figure: Typical data load cycle, showing the steps Start, Load into PSA, Load into ODS, Activate Data in ODS, Drop Indices, Load into Cube, Build Indices, Roll up Aggregates, and Build DB Statistics, together with the Data Load Monitor and Data Target Maintenance]
It can be the case that there is not enough time to run a system-wide statistics collection after the data loading has been completed. This might be because users start running queries directly after the data loads have completed, or because certain loads take much longer to complete than others. If this is the case, consider integrating a statistics collection method into the relevant
Process Chain(s). Statistics collection comes as a standard process in the Process Chain
maintenance.
Index maintenance and monitoring
Frequency: Weekly
The type of indexes and the administration possibilities can be quite different in SAP BW compared to
R/3.
Assign responsibility for the creation, maintenance and monitoring of indexes. Responsible team
members should have a good understanding of the performance benefits of indexes, but also of their maintenance costs. They should also have a good awareness of the data flows and should regularly
check for missing and degenerated indexes.
Tools and logs which can be used to detect degenerated indexes include:
• Transaction RSRV
• Administrator Workbench → right-click on the InfoCube → Manage → Performance tab
• Transaction DB14 → log for statistics collection
• Transaction DB02 → missing indexes
Further information on index fragmentation is available in SAP Note 771929. A detailed description of
index usage in BW is contained in the SAP Education course BW360.
Report SAP_INFOCUBE_DESIGNS
Frequency: Monthly
Running the report SAP_INFOCUBE_DESIGNS shows the database tables of an InfoCube, the number of records in these tables, and the ratio of the dimension table size to the fact table size. If dimension tables are too large, they can cause poorly performing table joins at database level. As the data volume grows and the data distribution changes over time, this check should be executed regularly.
When loading transaction data, IDs have to be generated in the dimension table for the entries. If you have a large dimension, this number range operation can negatively affect performance.
In the InfoCube star schema, a dimension table can be omitted if the InfoObject is defined as a line item. This means that the SQL-based queries become simpler. In many cases, the database optimizer can then choose better execution plans.
However, this also has one disadvantage: You cannot include additional characteristics in a dimension
that is marked as a line item at a later date. This is only possible with normal dimensions. If a
dimension table has more than one characteristic with a high granularity then consider placing these
characteristics into separate dimension tables.
Example:
A line item is an InfoObject, for example an order number, for each of whose values only one or a few facts are listed in the fact table of the InfoCube.
Guidelines how to limit the number of records in dimension tables
1. If an InfoObject has almost as many distinct values as there are entries in the fact table, define the
dimension of the InfoObject as a line item dimension. If defined in this manner, the system will write
the data directly to the fact table (a field with the data element RSSID, which points directly to the SID table of the InfoObject, is written in the fact table) instead of creating a dimension table that has
almost as many entries as the fact table.
2. Only group related characteristics into one dimension. Unrelated characteristics can use too much
disk space and cause performance problems (for example, 10,000 customers and 10,000 materials
may result in 100,000,000 records).
3. Avoid characteristics with a high granularity, that is, many distinct entries compared with the number of entries in the fact table.
4. If you cannot avoid characteristics with a high granularity and most of your queries do not use these
characteristics, create an aggregate that stores summarized information. Do not use characteristics
with a high granularity in this aggregate.
Please note that the line item flag can have a negative performance impact on F4 help usage when the setting 'Only Values in InfoProvider' is used (transaction RSD1 → 'Business Explorer' tab).
5. It is also worthwhile to try the checks in transaction RSRV. Use, for example, RSRV → All Elementary Tests → Transaction Data → Entries Not Used in the Dimension of an InfoCube.
Implementation
Regarding Line Item Dimensions: When creating the dimensions as part of InfoCube maintenance,
flag the relevant dimension as a line item. You can assign this dimension to exactly one InfoObject. To
check which InfoObject has the highest cardinality in the dimension, you can look at the fields of the dimension table highlighted in report SAP_INFOCUBE_DESIGNS. For example: transaction DB02 → Detailed Analysis (Tables and Indexes section) → enter the dimension table name → check for the field with the most distinct values.
The table below shows a list of InfoCubes with large dimension tables:
InfoCube   | Dimension table   | Rows in dimension | Entries in dimension as % of F table
YCCA_C11   | /BIC/DYCCA_C114   | 1,796,665         | 32
YCS_C01    | /BIC/DYCS_C011    | 855,039           | 53
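To reproduce such figures for a single InfoCube outside the report, the row counts can also be compared directly, as in the following minimal sketch. The table names are only illustrative and assume the usual /BIC/D<InfoCube><n> (dimension) and /BIC/F<InfoCube> (F fact table) naming convention for customer InfoCubes; for a complete and reliable overview, SAP_INFOCUBE_DESIGNS remains the tool of choice.

REPORT z_check_dim_ratio.

" Minimal sketch: compare the row count of one dimension table with the
" F fact table of the same InfoCube. Table names are illustrative only.
DATA: lv_dimtab  TYPE tabname VALUE '/BIC/DYCCA_C114',
      lv_facttab TYPE tabname VALUE '/BIC/FYCCA_C11',
      lv_dim     TYPE i,
      lv_fact    TYPE i,
      lv_ratio   TYPE p DECIMALS 1.

" Count rows in the dimension and F fact tables (dynamic table names)
SELECT COUNT( * ) FROM (lv_dimtab)  INTO lv_dim.
SELECT COUNT( * ) FROM (lv_facttab) INTO lv_fact.

IF lv_fact > 0.
  lv_ratio = lv_dim * 100 / lv_fact.
ENDIF.

WRITE: / 'Dimension rows:', lv_dim,
       / 'Fact rows:     ', lv_fact,
       / 'Ratio (%):     ', lv_ratio.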
Prefilling Cache
Frequency: Daily
By using the OLAP Cache, the load on the database server is reduced. Cached queries, especially
those that are cached in main memory or in flat files, do not cause any database access, or at least the number of database accesses is reduced. Because of this reduction, there are fewer displacements in the
database buffer and therefore the cache has a positive impact on the overall database performance
and the network load. With query execution, the runtime for reading the data is reduced by the cache
especially for queries reading a huge data volume.
To meet the business requirements in reporting performance, pre-calculation of some important
queries could be necessary. This means you can "warm up" the OLAP cache for these queries directly
instead of it being filled "indirectly" by the reporting in question. Then the result can be read out of the
OLAP Cache and this is much faster than from the database.
This function is not explicitly provided with BW 3.x. However, you can use the method described in the How-to paper "How to... Performance Tuning with the OLAP Cache", with certain restrictions.
In BI 7.0, you can use the BEx Broadcaster to fill the OLAP Cache. Call the broadcaster from the
Query Designer:
Choose “Create New Setting” → “Fill OLAP Cache” and select the filter navigation. If the query uses variables, a variant also has to be maintained; otherwise the variables cannot be filled in the background job.
Save and schedule the job: You can either schedule the pre-filling of the OLAP Cache to run at
predefined times or with each data change in the InfoProvider.
When the pre-filling of the OLAP Cache is scheduled to run with each data change in the InfoProvider, an event has to be raised in the process chain that loads the data to this InfoProvider. When the process chain executes the process “Trigger Event Data Change (for Broadcaster)”, an event is raised to inform the Broadcaster that the query can be filled in the OLAP cache.
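In the scenario above, the event is raised by the standard process type within the process chain itself. If data is loaded by a custom program outside a process chain and you still want to trigger event-based follow-up processing (for example an event-scheduled job), a background event can be raised with the standard function module BP_EVENT_RAISE, as in the minimal sketch below. The event name ZBW_DATA_LOADED is a hypothetical example and must first be defined in transaction SM62.

REPORT z_raise_load_event.

" Minimal sketch: raise a background-processing event after a data load
" so that event-triggered follow-up jobs can start. The event name is a
" hypothetical example and must be defined in transaction SM62 first.
DATA lv_event TYPE btceventid VALUE 'ZBW_DATA_LOADED'.

CALL FUNCTION 'BP_EVENT_RAISE'
  EXPORTING
    eventid = lv_event
  EXCEPTIONS
    OTHERS  = 1.

IF sy-subrc <> 0.
  WRITE: / 'Event could not be raised.'.
ELSE.
  WRITE: / 'Event', lv_event, 'raised'.
ENDIF.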
Create Aggregates
Frequency: When the service level requirements of the queries are not met, or when you encounter increasing I/O or a bad data hit ratio on the database server.
If the query runtime is mainly spent on the database, suitable aggregates should be created. The ratio
between rows selected and rows transferred indicates potential performance improvements with
aggregates. It should be investigated which queries are performing badly, and then which of them can be tuned with aggregates.
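As a rough illustration of how such candidates can be identified programmatically in BW 3.x, the sketch below sums the rows selected and rows transferred per InfoProvider from the BW statistics table RSDDSTAT. The field names INFOCUBE, QDBSEL and QDBTRANS and the threshold of 10 are assumptions to be checked against your release in SE11; the Technical Content or the ST03 workload monitor provide the same information without custom coding.

REPORT z_aggregate_candidates.

" Minimal sketch (BW 3.x): aggregate 'rows selected' vs. 'rows transferred'
" per InfoProvider from the BW statistics table RSDDSTAT. Field names are
" assumptions; verify them against the table definition in SE11.
TYPES: BEGIN OF ty_stat,
         infocube TYPE rsddstat-infocube,
         qdbsel   TYPE rsddstat-qdbsel,
         qdbtrans TYPE rsddstat-qdbtrans,
       END OF ty_stat.

DATA: lt_stat  TYPE STANDARD TABLE OF ty_stat,
      ls_stat  TYPE ty_stat,
      lv_ratio TYPE p DECIMALS 1.

" Sum up rows selected and rows transferred per InfoProvider
SELECT infocube SUM( qdbsel ) SUM( qdbtrans )
  FROM rsddstat
  INTO TABLE lt_stat
  GROUP BY infocube.

LOOP AT lt_stat INTO ls_stat.
  CHECK ls_stat-qdbtrans > 0.
  lv_ratio = ls_stat-qdbsel / ls_stat-qdbtrans.
  " A high selected/transferred ratio hints at aggregate potential
  IF lv_ratio >= 10.
    WRITE: / ls_stat-infocube, 'ratio:', lv_ratio.
  ENDIF.
ENDLOOP.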
Define responsibilities for aggregate creation in order to ensure that the queries are performing well.
Further information is available on the SAP Service Marketplace:
http://service.sap.com/bi → Performance → Aggregates
We also recommend ordering the TEWA50 “Empowering Workshop: SAP BW - Query Tuning with Aggregates”. This workshop covers:
• The concept of aggregates
• Finding queries that can be tuned with aggregates
• Creating suitable aggregates
• Maintaining aggregates efficiently
Delete Aggregates
Frequency: Quarterly
There are several reasons why an aggregate might be unnecessary:
• There can be very similar aggregates that might be combined into one new aggregate.
• There can be aggregates that are never used and that are not basis aggregates.
• There can be aggregates with an insufficient reduction factor compared to the InfoCube.
As aggregates contain redundant data and exist purely for performance reasons, unnecessary aggregates waste disk space and still have to be regularly maintained via roll-up and/or change run.
Check the usage of your aggregates regularly. It may be possible to merge similar aggregates. Some aggregates may not be used at all, or may be similar in size to the InfoCube and bring no performance improvement.
The EarlyWatch Report can also be checked for aggregates to be deleted.
Data Management
Archive Data
Frequency: Depending on the retention period of your data.
Without archiving, unused data remains in the database, and ODS objects and InfoCubes can grow without restriction. This can lead to a deterioration of general performance.
Establish a BW data archiving project to identify the InfoCubes and ODS objects that can be archived, and to plan and estimate the data storage requirements and the cost involved. The benefits of BW archiving include:
• Reduction of online disk storage
• Improvement in BW query performance
• Increased data availability, as roll-up, change runs and backup times will be shorter
• Reduced hardware consumption during loading and queries
An archiving strategy should include the following points and tasks:
• Periodic archiving of objects
• Estimation of the required data storage
• Identification of InfoCubes and ODS objects that can be archived
• Mapping of data targets to archiving objects
• Validation of archived data
• Read capability for archived data
• Retention period of archived and deleted data
• Time to adjust or rebuild aggregates
• Timing for locking the data target while deletion is taking place
For more information, refer to SAP Notes 643541 and 653393. There is an SAP service for data management and archiving that can be ordered in the SAP Service Marketplace:
http://service.sap.com/servicecatalog
There is information available on the SAP Help Portal:
http://help.sap.com → SAP NetWeaver → SAP Business Information Warehouse → Release → System Administration Tasks → Data Archiving
There is also information available on the SAP Service Marketplace:
http://service.sap.com/bi → BI InfoIndex → Archiving
Delete Persistent Staging Area (PSA) data
Frequency: Weekly
Determine a defined retention period for data in the PSA tables. This will depend on the type of data involved and the data uploading strategy. As part of this policy, establish a safe method to periodically delete the data. If the PSA data is not deleted on a regular basis, the PSA tables grow unrestricted. Very large tables increase the cost of data storage and the downtime for maintenance tasks, and degrade data upload performance.
From BW 3.x, request deletion is integrated into Process Chains. For further information on how to delete requests from the PSA, refer to the SAP Help Portal documentation:
http://help.sap.com → SAP NetWeaver → SAP Business Information Warehouse → Release → Administrator Workbench → Administration → Monitoring → Persistent Staging Area → Deleting Requests from the PSA
Delete Changelog data
Frequency: Weekly
For change logs, which are also PSA tables, the deletion can be done from the ODS object / DataStore object → Manage → Environment → Delete change log data.
Please note that only already updated change log requests can be deleted and that after the deletion a
reconstruction of requests for subsequent data targets using the ODS / DataStore change log will not
be possible.
Delete DTP Temporary Storage
Frequency: Weekly
This task is only relevant for BI 7.0. The data transfer process (DTP) can be restarted from the temporary storage in case of a problem, and you can view and verify the data in the temporary storage when analyzing problems.
The deletion of temporary storage can be set up from DTP Maintenance → Goto → Settings for DTP Temporary Storage → Delete Temporary Storage.
Here you can choose for each DTP:
• For which steps you want to have a temporary storage
• The level of detail for the temporary storage
• The retention time of the temporary storage
Archive / Delete Administration Tables
Frequency: Weekly
SAP Note 706478 (and the referenced sub-notes) provides an overview of administrative Basis tables that may increase considerably in size and cause problems if the entries are not regularly archived or deleted. Growing administration tables increase the total size of the system and negatively impact performance, for example when monitoring requests.
Affected tables are (among others):
• Application log tables (BAL*)
• IDoc tables (EDI*)
• BW monitoring tables for requests and process chains (archiving only possible with BI 7.0)
• Job tables (TBTCO, TBTCP)
Archive or delete entries in these tables as described in the note.
Delete BW Statistic Tables
Frequency: Quarterly
The data in the BW statistics tables RSDDSTAT* and in the BPS statistics tables UPC_STATISTIC* has to be deleted regularly.
Please follow SAP Notes 211940, 195157, 179046 and 366869 before deleting or archiving.
For the BW statistics tables RSDDSTAT*, the deletion of records older than x days can be done in transaction RSA1: choose 'Tools' → 'BW statistics for InfoProvider' and select 'Delete data'. The time period for which data should be deleted can then be entered. Please read SAP Note 309955 for information on usage and errors in the BW statistics.
For the BPS statistics tables UPC_STATISTIC*, the deletion of records older than x days can be done in transaction BPS_STAT0.
Delete tRFC Queues
Frequency: Weekly
In BW and all connected source systems, check all outbound tRFC queues (transaction SM58) from
time to time to see whether they contain old unsent data packages or Info-IDocs. The reason for these leftover entries could be that they have already been processed in BW, and therefore cannot be processed again, or that they contain old requests that can no longer be sent following a system copy or an RFC connection change.
You should then delete these tRFC requests from the queue, not only to reduce the data volume in your system but primarily to prevent these old entries from accidentally being sent again to BW data targets.
In the RFC destination (transaction SM59) from the source system to BW, the connection should be set up to prevent a terminated transfer from being restarted automatically and, most importantly, to prevent periodic automatic activation of incorrectly sent data.
Delete unused Queries / Workbooks
Frequency: Quarterly
Unused or unnecessary queries can lead to longer response times in the 'Open' → 'Query' dialog. They also cause an unnecessary allocation of disk space.
Check regularly whether queries are really still in use. The 'Last Used' field in transaction RSZDELETE (Delete Objects) is based on table RSZCOMPDIR: 'Last Used' is updated in table RSZCOMPDIR with the exact timestamp each time a query is executed with the BEx Analyzer or the Web front end.
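If you want a quick overview of candidates before running RSZDELETE, the last-used information can also be read directly, as in the following sketch. The field names COMPID, COMPTYPE and LASTUSED in RSZCOMPDIR and the component type 'REP' for queries are assumptions that should be verified in SE11; the actual deletion should always be performed with transaction RSZDELETE.

REPORT z_find_unused_queries.

" Minimal sketch: list queries whose 'Last Used' timestamp in RSZCOMPDIR
" is older than a given number of days. Field names and the component
" type 'REP' are assumptions to be verified in SE11.
PARAMETERS p_days TYPE i DEFAULT 365.

DATA: lv_compid   TYPE rszcompdir-compid,
      lv_lastused TYPE rszcompdir-lastused,
      lv_cutoff   TYPE timestamp,
      lv_date     TYPE sy-datum.

" Build the timestamp threshold from the retention period
lv_date = sy-datum - p_days.
CONVERT DATE lv_date TIME '000000'
  INTO TIME STAMP lv_cutoff TIME ZONE sy-zonlo.

SELECT compid lastused FROM rszcompdir
  INTO (lv_compid, lv_lastused)
  WHERE comptype = 'REP'
    AND lastused < lv_cutoff.
  WRITE: / lv_compid, lv_lastused.
ENDSELECT.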
Delete Temporary Query Views
Frequency: Monthly
The BW System uses different temporary objects for query execution, compression and other
processes. The system often generates them outside of the ABAP DDIC for performance reasons or
because they cannot be defined there (as stored procedures or triggers). In general these temporary
objects are deleted when the processing of the query is finished, but if a query is aborted, residual entries can persist.
Run report SAP_DROP_TMPTABLES and function RSDDTMPTAB_CLEANUP (with parameter
I_NAMETYPE = 03) to remove the temporary query views (“/BI0/03xxx”). Make sure that no query
execution, compression, aggregate roll-up or data extraction runs while the report
SAP_DROP_TMPTABLES is executed, otherwise this can result in terminations of the processes.
For further details, please see SAP Note 449891.
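If both cleanup steps are to be scheduled together in a quiet time window, they could be bundled into one small program such as the sketch below. Only the I_NAMETYPE parameter mentioned above is supplied to RSDDTMPTAB_CLEANUP; any further parameters of the function module and the selection screen of SAP_DROP_TMPTABLES are deliberately left at their defaults and should be checked against SAP Note 449891.

REPORT z_cleanup_tmp_query_views.

" Minimal sketch: remove residual temporary query views in one step.
" Schedule only in a time window without query execution, compression,
" aggregate roll-up or data extraction (see SAP Note 449891).
" The selection screen of SAP_DROP_TMPTABLES is left at its defaults
" here; in practice a variant would normally be used.
SUBMIT sap_drop_tmptables AND RETURN.

" Clean up temporary query views ("/BI0/03..."); I_NAMETYPE = '03' is
" taken from the text above, other parameters are not used in this sketch.
CALL FUNCTION 'RSDDTMPTAB_CLEANUP'
  EXPORTING
    i_nametype = '03'.

WRITE: / 'Temporary query view cleanup triggered.'.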
Data Consistency
Schedule RSRV Checks
Frequency: Weekly, important checks daily.
Use transaction RSRV to analyze the important BW Objects in the system. This transaction also
provides repair functionality for some tests. These tests only check the intrinsic technical consistency
of BW Objects such as foreign key relations of the underlying tables, missing indexes etc. They do not
analyze any business logic or semantic correctness of data.
Missing table indexes or inconsistencies in master or transactional data can have a negative impact on
your system performance or lead to missing information in your BW reporting.
You can schedule the tests in RSRV to be run regularly in the background by defining a specific test
package for your core business process and needs. Weekly checking (e.g. on the weekend) should be
adequate in general, but important checks (e.g. missing table indexes) could be also performed on a
more frequent basis (e.g. daily).
Another option is to start these checks based on an event (e.g. tests are triggered after data loading).
If you do so, make sure that the application log is checked regularly for the results and that the necessary corrections are made in time.
For further information, please read SAP Note 619760, which contains the latest news about the RSRV checks:
• A detailed description of the tests
• Information as to whether repair functionality is available in the check
When the results of the checks are displayed in the application log, you can double-click on a message. You will then see the message again on the right-hand side, with additional buttons for long texts and details (scroll to the right side) if applicable.
For more general information on data consistency within your BW project refer also to the SAP Service
Marketplace: http://service.sap.com/bi → Data Consistency
Further Information
Overview of jobs and tasks
The table below lists all the jobs and tasks discussed in this Best Practice.
Area                     | Job / Task                              | Frequency
-------------------------|-----------------------------------------|-------------------------------------------
General Tasks            | EarlyWatch Alert                        | Weekly
                         | Support Package Strategy                | Quarterly
                         | Backup Strategy                         | Daily
                         | Transport Window                        | Weekly
Monitoring - Basis       | Basis Jobs                              | Hourly to monthly, depending on the specific job
                         | Basis Monitoring                        | Daily
Monitoring - Application | Monitor Data Staging                    | Daily
                         | Monitor Query Performance               | Weekly
Performance              | Compression                             | Daily
                         | Calculate Statistics                    | Daily
                         | Index maintenance and monitoring        | Weekly
                         | Report SAP_INFOCUBE_DESIGNS             | Monthly
                         | Prefilling Cache                        | Daily
                         | Create Aggregates                       | When the service level requirements of the queries are not met, or when you encounter increasing I/O or a bad data hit ratio on the database server
                         | Delete Aggregates                       | Quarterly
Data Management          | Archive Data                            | Depending on the retention period of your data
                         | Delete PSA data                         | Weekly
                         | Delete Changelog data                   | Weekly
                         | Delete DTP Temporary Storage            | Weekly
                         | Archive / Delete Administration Tables  | Weekly
                         | Delete BW Statistic Tables              | Quarterly
                         | Delete tRFC Queues                      | Weekly
                         | Delete unused Queries / Workbooks       | Quarterly
                         | Delete Temporary Query Views            | Monthly
Data Consistency         | Schedule RSRV Checks                    | Weekly, important checks daily
Feedback and Questions
Send any feedback by formulating an SAP customer message to component SV-SMG at
http://service.sap.com/message in the SAP Service Marketplace.
© Copyright 2004 SAP AG. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission
of SAP AG. The information contained herein may be changed without prior notice.
Some software products marketed by SAP AG and its distributors contain proprietary software components of
other software vendors.
Microsoft®, WINDOWS®, NT®, EXCEL®, Word®, PowerPoint® and SQL Server® are registered trademarks of Microsoft Corporation.
IBM®, DB2®, OS/2®, DB2/6000®, Parallel Sysplex®, MVS/ESA®, RS/6000®, AIX®, S/390®, AS/400®, OS/390®, and OS/400® are registered trademarks of IBM Corporation.
ORACLE® is a registered trademark of ORACLE Corporation.
INFORMIX®-OnLine for SAP and Informix® Dynamic Server™ are registered trademarks of Informix Software Incorporated.
UNIX®, X/Open®, OSF/1®, and Motif® are registered trademarks of the Open Group.
HTML, DHTML, XML, XHTML are trademarks or registered trademarks of W3C®, World Wide Web Consortium, Massachusetts Institute of Technology.
JAVA® is a registered trademark of Sun Microsystems, Inc. JAVASCRIPT® is a registered trademark of Sun Microsystems, Inc., used under license for technology invented and implemented by Netscape.
SAP, SAP Logo, R/2, RIVA, R/3, ABAP, SAP ArchiveLink, SAP Business Workflow, WebFlow, SAP EarlyWatch,
BAPI, SAPPHIRE, Management Cockpit, mySAP.com Logo and mySAP.com are trademarks or registered
trademarks of SAP AG in Germany and in several other countries all over the world. All other products mentioned
are trademarks or registered trademarks of their respective companies.
Disclaimer: SAP AG assumes no responsibility for errors or omissions in these materials. These materials are
provided “as is” without a warranty of any kind, either express or implied, including but not limited to, the implied
warranties of merchantability, fitness for a particular purpose, or non-infringement.
SAP shall not be liable for damages of any kind including without limitation direct, special, indirect, or
consequential damages that may result from the use of these materials. SAP does not warrant the accuracy or
completeness of the information, text, graphics, links or other items contained within these materials. SAP has no
control over the information that you may access through the use of hot links contained in these materials and
does not endorse your use of third party Web pages nor provide any warranty whatsoever relating to third party
Web pages.