
PERFORMANCE TUNING FOR PEOPLESOFT APPLICATIONS

1. INTRODUCTION
It is a widely known fact that 80% of performance problems are a direct result of the application
code. There are other factors that contribute to poor performance, such as server configuration
and resource contention. Assuming you have tuned your servers and followed the guidelines for
your database server, application server, and web server, most of your remaining performance
problems can be addressed by tuning the PeopleSoft application.
This paper presents methodologies and techniques for optimizing the performance of PeopleSoft
applications. The methodologies that are discussed are intended to provide useful tips that will
help to better tune your PeopleSoft applications. These tips focus on tuning several different
aspects within a PeopleSoft environment ranging from servers to indexes. You will find some of
these tips provide you with a significant improvement in performance while others may not apply
to your environment.
2. SERVER PERFORMANCE
In general, the approach to application tuning starts by examining the consumption of
resources. The entire system needs to be monitored to analyze resource consumption on
an individual component basis and as a whole.
The key to tuning servers in a PeopleSoft environment is to implement a methodology to
accurately capture as much information as possible without utilizing critical resources
needed to serve the end-users.
Traditional tools used to measure utilization impact the system being measured and
ultimately the end-user experience. Commands like the following provide snapshot data, but
not without an associated cost. These tools can consume a significant amount of resources,
so care should be taken when you execute them.
• df
• iostat
• ipcs
• netstat
• ps
• sar
• swapinfo
• size
• timex
• top
• uptime
• vmstat
• glance and gpm
The goal of using these native commands is to identify if, and where, a bottleneck exists in the server.
Is the problem in the CPU, I/O, or memory? These native tools provide indicators, but at the same
time they can skew the results because of the overhead associated with them. Typically, additional
third-party tools are needed to complete the analysis.
The last hurdle companies are facing in tuning the server is making timing decisions on when to
upgrade the hardware itself. To do this, much more information needs to be collected and stored
in order to understand if an historical spike in resource utilization was a one-time aberration or a
regular occurrence building over time. The recommendation is to look at third party vendors for
solutions that can collect key performance indicators while minimizing overhead on the system.
The collected data can then be put in a repository for detailed historical analysis.
3. WEB SERVER PERFORMANCE
The release of PeopleSoft Pure Internet Architecture™ introduces new components to
PeopleSoft architecture—the web server and application server. The application server is where
most shops struggle with appropriate sizing. Web servers are used to handle end-user
requests from a web browser, eliminating the administrative costs associated with loading
software (fat clients) on individual desktops. The benefit is a significant savings in software
deployment, maintenance, and upgrade costs. While the shift from fat clients to thin clients lessens the
administrative burden, it increases the need to ensure the web servers are finely tuned, since they
will service a large number of clients. Achieving optimal performance from these web servers is
vital due to the mission-critical role PeopleSoft plays in today’s enterprise.
Recommendations for ensuring good performance for your web servers:
• Ensure your load balancing strategy is sound
• Implement a solution to verify and highlight changes in traffic volumes
• Closely monitor the response times to verify that your strategy is optimizing your web servers
• Measure and review historical patterns of server resource utilization (see the server section above)
• Increase the HEAP size to 200, 250, 300, or 380 MB in the WebLogic startup script

4. TUXEDO PERFORMANCE MANAGEMENT
Tuxedo is additional middleware PeopleSoft utilizes to manage the following Internet application
server services:
• Component Processor—Responsible for executing PeopleSoft Components—the core
PeopleSoft application business logic
• Business Interlink Processor— Responsible for managing the interactions with third-party
systems
• Application Messaging Processor—Manages messages in a PeopleSoft system
• User Interface Generator—Generates the user interface based on the Component or Query
definition and generates the appropriate markup language (HTML, WML, or XML) and scripting
language (JavaScript, WMLScript) based on the client accessing the application
• Security Manager—Authenticates end-users and manages their system access privileges
• Query Processor—Executes queries using the PeopleSoft Query tool
• Application Engine—Executes PeopleSoft Application Engine processes
• Process Scheduler—Executes reports and batch processes and registers the reports in the
Portal’s Content Registry
• SQL Access Manager—Manages all interaction with the relational DBMS via SQL
This Tuxedo middle tier is another critical and influential component of performance. Similar to
the web server, what is needed is a way to see into the “black box” to further understand some of
the key performance metrics.
Some of the performance metrics you want to capture when analyzing Tuxedo are:
• Transaction volumes by domain, server, and application
• Response time for each end-user request
• The Tuxedo service generating a poorly performing SQL statement
• Breakdown of Tuxedo time into service time and queue time
• Problem origin – is it in Tuxedo or the database?
• Response time comparisons across multiple Tuxedo servers
Our experience has shown that too often companies throw hardware at a Tuxedo performance
problem when a more effective solution can be as simple as adding another domain to the
existing server(s). This is due to the fact that PeopleSoft and Tuxedo lack management solutions
that provide historical views of performance.
5. APPLICATION PERFORMANCE
It is an accepted fact that 80% of application and database problems reside in the application
code. However, there are other technical items to consider that could influence the application's
performance. Here are some specific items to focus on when evaluating your database
environment:
• Make sure the database is sized and configured correctly
• Make sure that the hardware and O/S environments are set up correctly
• Verify that patch levels are current
• Fix common SQL errors
• Review documentation of known problems with PeopleSoft supplied code
• Be sure to check available patches from PeopleSoft that might address the problem
• Review PeopleSoft suggested kernel parameters
• Set up the right number of processes
• Review the application server blocking for long running queries
• Make sure you don’t undersize your version 8 application server
It is also recommended to continue to review these items on a periodic basis.
6. DATABASE PERFORMANCE
The performance of an application depends on many factors. We will start with the overall general
approach to tuning SQL statements. We will then move to such areas as indexes, performance
monitoring, queries, the Tempdb (Tempdb is often referred to as plain “TEMP”), and, finally,
servers and memory allocation.
To understand the effect of tuning, we must compare ‘time in Oracle’ with ‘request wait time’.
Request wait time is the time that a session is connected to Oracle but is not issuing SQL
statements. Time in Oracle is the amount of time spent resolving a SQL statement once it has been
submitted to Oracle for execution. If time in Oracle is not significantly smaller than the request
wait time, then application tuning should be examined. Request wait time is almost always much
greater than time in Oracle, especially for online users, because of think time.
One exception to this is a batch job that connects to Oracle, submits SQL statements, and then
processes the returned data. For such a job, a high ratio of request wait time to time in Oracle could
indicate a loop in the application outside of Oracle.
This should be identified and eliminated before continuing the performance analysis.
The next step focuses on tuning the SQL statements that use the most resources. To find the
most resource-consuming SQL statements, a scheduled collection approach can be used.
Duration is a commonly used criterion for locating offending SQL statements. Other useful
criteria include the following wait states: I/O, row lock, table lock, shared pool, buffer, rollback
segment, redo log buffer, internal lock, log switch and clear, background process, CPU, and
memory. For each offending SQL statement, the execution plan and database statistics are
analyzed. The following statistics are important: table and column selectivity, index clustering
factor, and storage parameters. First, all the joins of the SQL are considered. For each join, the
ordering of the tables is analyzed. It is of major importance to have the most selective filter
condition on the driving table. Then, the type of the join is considered. If the join is a nested
loop, forcing it into a hash join can be advantageous under some conditions.
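As an illustration of forcing a hash join, the sketch below uses Oracle's ORDERED and USE_HASH hints; the table names, columns, and filter value are hypothetical and only show the shape of the technique:

   -- Drive from the more selective table (listed first in the FROM clause
   -- because of the ORDERED hint) and hash-join to the larger table.
   SELECT /*+ ORDERED USE_HASH(b) */
          s.business_unit, s.voucher_id, b.descr
   FROM   ps_small_tbl s,
          ps_big_tbl   b
   WHERE  s.voucher_id    = b.voucher_id
   AND    s.business_unit = 'US001';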
The analysis stage usually results in several modification proposals, which are applied and tested
in sequence. Corrective actions include database object changes and SQL changes. The typical
database object changes are index changes, index rebuilds, and table reorganizations. The typical
SQL changes are replacing a subquery with a join, splitting one SQL statement into multiple
statements, and inserting Oracle hints to direct the optimizer to the right execution plan.
7. INDEXES
Tuning indexes is another important factor in improving performance in a PeopleSoft
environment. Index maintenance is crucial to maintaining good database performance. Statistics
about data distribution are maintained in each index. These statistics are used by the optimizer to
decide which, if any, indexes to use. The statistics must also be maintained so that the optimizer
can continue to make good decisions. Thus, procedures should be set up to update the statistics
as often as is practical.
Keep in mind that objects that do not change do not need to have their statistics created again. If
the object has not changed, the statistics will be the same; recreating the same statistics
will only waste resources.
Since PeopleSoft uses a lot of temp tables that are loaded and then deleted, but not dropped, it is
helpful to create the statistics when those tables are full of data. If the statistics are created when
the table is empty, the stats will reflect that fact. The Optimizer will not have correct information
when it chooses an access path.
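As a sketch of this idea, assuming Oracle's DBMS_STATS package and a hypothetical working table PS_EXAMPLE_TMP owned by SYSADM, statistics would be gathered while the table still holds its working data:

   -- Gather statistics while the working table is populated, so the optimizer
   -- sees representative row counts and column selectivity.
   BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
         ownname          => 'SYSADM',
         tabname          => 'PS_EXAMPLE_TMP',   -- hypothetical working table
         estimate_percent => 20,                 -- sample 20% of the rows
         cascade          => TRUE);              -- also gather index statistics
   END;
   /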
Periodically, indexes should be rebuilt to counter index fragmentation. An index creation script
can be generated via PeopleTools to drop and rebuild indexes. This procedure will reclaim index
space wasted on blocks as a result of Oracle logical deletes. It is only necessary on tables that are
changed often (inserts, updates, or deletions).
The index scheme is also important to examine. The indexes in a standard PeopleSoft installation may
not be the most efficient ones for every installation. Closely examine your data’s pattern and
distribution, and modify the indexes accordingly. For example, the index on PS_VOUCHER
(BUSINESS_UNIT, VOUCHER_ID) could be changed to (VOUCHER_ID, BUSINESS_UNIT) for
an implementation with only a few business units. Use ISQLW Query Options (Show Query Plan
and Show Stats I/O) to determine the effectiveness of new indexes. However, be
careful to thoroughly test the new index scheme to find all of its ramifications.
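A minimal sketch of such a change is shown below; the index name is a placeholder, and in practice the change should be made through Application Designer so that the index definition stays in sync with the PeopleTools metadata:

   -- Hypothetical reordered index: lead with VOUCHER_ID when there are only a
   -- few business units, so the more selective column drives the index access.
   DROP INDEX ps0voucher;
   CREATE INDEX ps0voucher
      ON ps_voucher (voucher_id, business_unit);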
8. QUERIES
It is a good idea to examine queries when trying to fix a problem that is affecting the application.
Query Analyzer can be used to see optimizer plans for slow SQL statements. Choose
“Query/Display Plan” to see a graphical representation of a query plan. Alternatively, by issuing
“set showplan_text on” and running the statement, you will get a textual representation of the plan,
showing the indexes used, the order in which the tables were accessed, and so on.
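As a brief sketch of the textual approach (the query shown is only a placeholder for the slow statement you are investigating):

   -- Return the estimated plan as text instead of executing the statement.
   SET SHOWPLAN_TEXT ON
   GO
   SELECT business_unit, voucher_id
   FROM   ps_voucher
   WHERE  voucher_id = 'V0000123'
   GO
   SET SHOWPLAN_TEXT OFF
   GO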
When investigating queries, the number of worktables created per second should also be addressed.
If you see a large number of worktables being created per second (i.e., hundreds per second), this
means that a large amount of sorting is occurring. This may not be a serious problem, especially if
it does not correspond with a large amount of I/O.
However, performance could be improved by tuning the queries and indexes involved in the sorts;
ideally, this will eliminate some of the sorting.
Recommendations for ensuring good performance for your database servers:
- Avoid using the comparison operators >, <, >=, <=, IS NULL, and IS NOT NULL
- Avoid using NOT IN and !=
- Avoid using LIKE '%pattern' and NOT EXISTS
- Avoid calculations on unindexed columns, and avoid OR (use UNION instead)
- Avoid HAVING (use a WHERE clause instead)
- Always use aliases to prefix all columns
- Always place indexed columns higher in the WHERE clause
- Always use SQL joins instead of sub-queries (see the sketch after this list)
- Always make the table with the least number of rows the driving table by making it first in the FROM clause
- Always establish a tuning environment that reflects your production database
- Always establish performance expectations before you begin
- Always design and develop with performance in mind
- Create indexes to support selective WHERE clauses and join conditions
- Use concatenated indexes where appropriate
- Consider indexing more than you think you should, to avoid table lookups
- Pick the best join method: nested loop joins are best for indexed joins of subsets, while hash joins are usually the best choice for "big" joins
- Pick the best join order: pick the best "driving" table and eliminate rows as early as possible in the join order
- Use bind variables; bind variables are key to application scalability
- Use Oracle hints where appropriate
- Compare performance between alternative syntax for your SQL statement
- Consider utilizing PL/SQL to overcome difficult SQL tuning issues
- Consider using third party tools to make the job of SQL tuning easier
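To illustrate the join-versus-subquery recommendation, here is a minimal sketch; the table and column names are illustrative placeholders rather than a statement taken from a delivered application:

   -- Subquery form: the outer query probes the subquery result for every candidate row.
   SELECT v.voucher_id
   FROM   ps_voucher v
   WHERE  v.voucher_id IN (SELECT l.voucher_id
                           FROM   ps_vchr_line l
                           WHERE  l.merchandise_amt > 10000);

   -- Equivalent join form, which usually gives the optimizer more freedom.
   SELECT DISTINCT v.voucher_id
   FROM   ps_voucher v,
          ps_vchr_line l
   WHERE  v.voucher_id      = l.voucher_id
   AND    l.merchandise_amt > 10000;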
Use of Bind Variables:
The number of compiles can be reduced to one per multiple executions of the same
SQL statement by constructing the statement with bind variables instead of literals.
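A minimal sketch of the difference, with placeholder table and column names:

   -- Literal form: each distinct value produces a new statement to parse and compile.
   SELECT name FROM ps_personal_data WHERE emplid = 'KU0001';
   SELECT name FROM ps_personal_data WHERE emplid = 'KU0002';

   -- Bind-variable form: one shared statement, compiled once and re-executed.
   SELECT name FROM ps_personal_data WHERE emplid = :1;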
- Application Engine - ReUse Flag
Application Engine programs use bind variables in their SQL statements, but
these variables are PeopleSoft specific. When a statement is passed to the
database, PeopleSoft Application Engine sends the statement with literal values.
The only way to tell the Application Engine program to send the bind variables is
by activating the ReUse flag on the Application Engine step containing the
statement that needs to use the bind variable.
9. TEMPDB
To ensure that your application is performing at peak efficiency, it is important to look at the
tempdb. The tempdb is used for sorting result sets, either because of an ‘order by’ clause in a
query or to organize pre-result sets needed to execute a given query plan. If tempdb is being
used extensively (evidenced by many work tables being created per second or heavy I/O to
tempdb files), performance can be improved by tuning it.
First, consider moving the tempdb to its own set of disks. Do this with ‘alter database’ using the
‘modify file’ option to specify a new location for tempdb’s data file and log file. It may also be
worthwhile to increase the SIZE option to a larger value, such as 100MB and increase the
FILEGROWTH option to around 50MB.
Another option to consider is adding several data files to tempdb rather than having just one. This
will help reduce contention on the tempdb. Do this by using ‘alter database’ using the ‘add file’
option. As with tempdb’s original data file, increase the SIZE option to a larger value as well as
the FILEGROWTH option.
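The sketch below shows what these changes might look like in Transact-SQL; the logical file names, paths, and sizes are placeholders to adapt to your environment, and the file relocation takes effect after SQL Server is restarted:

   -- Relocate tempdb's primary data file to a dedicated disk; SQL Server allows
   -- only one file property to be changed per MODIFY FILE statement.
   ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb\tempdb.mdf')
   ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 100MB)
   ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILEGROWTH = 50MB)

   -- Add a second data file to reduce contention on tempdb.
   ALTER DATABASE tempdb ADD FILE (NAME = tempdev2,
                                   FILENAME = 'U:\tempdb\tempdb2.ndf',
                                   SIZE = 100MB,
                                   FILEGROWTH = 50MB)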
10. SERVERS AND MEMORY ALLOCATION
The use of an application server is strongly recommended for all on-line connections. The
application server queues incoming requests and dramatically reduces process blocking in the
database. This will not help batch processes, but it will greatly increase the number of on-line
users.
Collecting CPU wait, memory wait, and I/O wait may show that the application is having to wait on
server resources. Typically, this indicates an undersized server or other applications on the server
hogging resources. Today, many IT organizations are looking at server
consolidation to reduce the cost of ownership. Taking this approach puts you in a position to
analyze performance over time as an aid to a server consolidation effort.
While it is possible to share a PeopleSoft database server with other applications – database or
otherwise – it is always preferable to dedicate the entire server to the PeopleSoft installation. Any
process running on the server may use resources that could be better utilized by the database
engine or a PeopleSoft process running on the database server. Use of the database server as a
file server can seriously degrade the database response time, thus it is important to dedicate the
entire server to PeopleSoft processes. The single greatest determinant of database server
performance is the amount of memory allocated to it. The memory configuration parameter is
expressed in 2K blocks. Thus, if you wanted to allocate 100 MB you would set the memory to
51200.
Generally, the more memory allocated to a server the better it will perform. The goal is to add
enough memory to increase performance, but not so much that it no longer helps. This
determination can be made through the NT Performance monitor. Monitor the Cache Hit Ratio
and disk usage via the Performance monitor to determine if more memory should be allocated to
the database engine. For the most part, with database servers it is better to have too much memory
allocated than not enough. For application servers, additional memory usually helps, but having too
much can have a negative effect, since more memory means longer search times for operations that
frequently look for an object in memory.
11. CONCLUSION:
The presented methods are intended as tips to help better tune your PeopleSoft applications.
These tips are simply suggestions, as mentioned earlier, and they need to be used with caution
as each tip may not apply directly to your situation. However, if used properly, the suggested tips
can tune your PeopleSoft applications to perform at an optimal level.
There are many native tools available to monitor the various components that make up the
PeopleSoft landscape. How effective they are at identifying the root cause of performance
problems is still in question. The ultimate goal is to find a single solution that provides visibility
end-to-end and everywhere in between.
It is a widely known fact that 80% of performance problems are a direct result of the
application code. There are other factors that contribute to poor performance, such as server
configuration and resource contention. Assuming you have tuned your servers and followed the
guidelines for your database server, application server, and web server, most of your
performance problems can be addressed by tuning the PeopleSoft application code.
Tuning the application can consist of tuning PeopleCode, SQR code, SQL-intensive code,
queries, nVision, and indexes.
Database Tuning:
Ineffective Indexing
One of the most common performance problems in the PeopleSoft Application is ineffective
indexing against key application tables. As we stated earlier, the PeopleSoft software is
delivered with a generic code set that runs on several database platforms. In addition to the
code set, the indexes that exist are not specific to any one environment. Because of this, you
need to fine-tune your application by selectively finding poorly performing processes and
determining whether or not the cause is ineffective indexing. This can be achieved by
tracing the SQL of poorly performing pages, Application Engine programs, COBOL programs, or SQR
programs and finding the long-running queries. Once you find the problematic queries that take
a significant amount of time to complete, you will need to analyze the indexes that are being
used.
Here is an example of how to fine-tune your indexes. The Journal Generator application within
the Financials software is a COBOL program (FSPGJGEN) that performs a large number of
selects based on the run control ID parameters. Suppose that running this process takes
approximately two hours to process only 50 journals.
The first thing to do is to turn on tracing for that specific process and re-run the process in
your test environment. Always do your tuning in a test environment; you do not want to blindly
start adding indexes to your production environment without performing full regression testing,
as the results can be catastrophic. Once you have the trace file, you can examine it and look for
the timings of the long-running queries.
After examining the trace file we find the SQL statement that is causing the performance
problem. Once you find the SQL statement, you can run it through your RDBMS query tool to
determine which indexes are being used. If you are using SQL Server, you will issue the
following command:
SET SHOWPLAN_ALL { ON | OFF }
If you are using Oracle, you will use EXPLAIN PLAN. Once you execute the command, you
can then run your select statement. This returns detailed information about how the
statements are executed and provides estimates of the resource requirements for the
statements, including the indexes that are being utilized.
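For the Oracle side, a minimal sketch might look like this; the statement shown is a placeholder for the traced query, and DBMS_XPLAN assumes Oracle 9i or later (on older releases you would query PLAN_TABLE directly):

   -- Capture the execution plan for the traced statement without running it.
   EXPLAIN PLAN FOR
   SELECT business_unit, journal_id
   FROM   ps_jrnl_header
   WHERE  journal_date = TO_DATE('2003-01-31', 'YYYY-MM-DD');

   -- Display the captured plan (Oracle 9i or later).
   SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);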
The next step is to look at the columns in the WHERE clause of the SQL statement and
determine if the indexes being used, if any, contain these columns. If they do not, you can
simply create a new index with the missing columns. Once it is created, re-run your query to
re-examine the index usage. Repeat this process until you achieve improved performance.
In some cases, certain SQL statements will never even use an Index. This is what is called a full
table scan. Full table scans are extremely taxing on the system and cause major performance
degradation. If you determine that a SQL query is performing a full table scan, simply create an
Index or Indexes with the columns that are contained within the where clause.
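As an illustration (the table, columns, and index name below are hypothetical), an index built on the WHERE-clause columns lets the optimizer avoid the full table scan:

   -- Hypothetical query doing a full table scan:
   --   SELECT ... FROM ps_example_tbl
   --   WHERE  business_unit = 'US001' AND process_instance = 12345;
   -- Create an index on the WHERE-clause columns so the scan becomes an index access.
   CREATE INDEX ps0example_tbl
      ON ps_example_tbl (business_unit, process_instance);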
Tuning and adding indexes is one of the most overlooked and very simple ways to improve
performance. Just remember the following steps.
- Trace
- Examine the SQL
- Analyze the SQL in your RDBMS tool
- Determine Indexes being used
- Create Indexes with Columns in Where clause
- Re-Analyze the SQL and repeat until you get improved results
Another tip for tuning indexes is to try re-ordering the columns within the index. You can
sometimes gain huge performance improvements by simply changing the order of the columns
when you create the index. This is a trial-and-error method that you will have to test; there is
no hard and fast rule for which column should be placed in what order.
Temporary Tables
PeopleSoft utilizes temporary tables in many of its application programs, especially Application
Engine programs. These temporary tables are constantly populated with data and then deleted,
over and over. Each time a temporary table is populated and deleted, certain databases such as
Oracle leave the high-water mark in place, which produces full table scans.
For example, an Application Engine program can insert 200,000 rows and then delete them. The
next time that application runs, it inserts only 2,000 rows, yet a read against that table
performs poorly. Additionally, the indexes that exist on these temporary tables are heavily
fragmented from all of the deletes. Temporary tables are a common cause of performance
problems.
To prevent fragmentation and improve performance on the most heavily used temporary tables,
you should truncate these tables on a regular basis.
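A minimal sketch of the idea (the table name is a placeholder, and the statement assumes the program that owns the table is not running at the time):

   -- TRUNCATE resets the high-water mark and releases the space held by deleted
   -- rows, unlike DELETE, so subsequent reads no longer scan empty blocks.
   TRUNCATE TABLE ps_example_tmp;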
You can learn more about tuning your PeopleSoft applications and database from an e-book I
wrote a few years ago titled "Tuning your PeopleSoft Applications and Environment for
Maximum Performance" .
Happy Tuning…