Determining code path

Introduction
Unfortunately, the z/TPF debugger, Performance Analyzer, Code Coverage Tool, and so
on cannot find and solve your code bugs for you. However, the z/TPF debugger and
associated tools can help you locate and identify them. The key is to understand what
features are available and to use them together creatively to diagnose your code bugs.
How do you learn about the features of the debugger and associated tools?
 Documentation is available at http://www-01.ibm.com/software/htp/tpf/. See the
Fast links section on the lower left side for the following information:
o TPFUG presentations – A debugger and TPF Toolkit update is often
provided at each TPFUG to announce new features, provide education and
so on. These presentations are usually given in the TPF Toolkit Task
Force or the Development Tools Subcommittee.
o Tools -> z/TPF Debugger -> Debugger for z/TPF Demo Movie – This
demo movie was created several years ago to highlight the function that
was available at that time. Even though this movie is out of date, the
education delivered in this format has been found to be very useful and the
core function described continues to exist today.
o TPF Family Libraries -> Open Current Information Center -> z/TPF PUT
-> Library -> Debugger User’s Guide – The Debugger User's Guide
describes the essence of using the debugger and is a good source for what
function is currently available.
o Tools -> z/TPF Debugger -> z/TPF Redbook - z/TPF Application
Modernization using Standard and Open Middleware Redbook. See
appendices A-D for an in-depth, step-by-step discussion of using the TPF
Toolkit, Debugger, Code Coverage Tool, and so on. This may be a good
resource if you are new to the toolset.
o Tools -> z/TPF Debugger - The practical education articles, of which this
article is part, provide explanations of how to use various debugger
features to address debugging situations.
o Tools -> z/TPF Debugger - The developerWorks Custom
Communications Packages article provides an explanation of how to
administer custom communications packages to address debugging
situations.
 The TPF Toolkit help that is found in the Help menu also provides information
regarding the features that are available. Select the Help menu -> Help contents.
Then select Debugging TPF Applications, Analyzing Code Coverage of TPF
Applications, or Analyzing Performance of TPF Applications.
 This set of documentation.
 Experimentation – Experimenting with the tool is one of the best ways to
understand how the tool works.
 Ask IBM – You can open a PMR to ask your question or send an email to
tpfqa@us.ibm.com with your question. If your question is involved enough or
you have a large number of questions, ask to have a conference call with the
experts to get your questions answered.
The purpose of this documentation is to augment the information currently available in
the Debugger User's Guide and TPF Toolkit help documents by describing the ways in
which the debugger and associated tools can be used to solve problems. The approach of
this documentation is to examine different situations and come up with ideas about how
to use different features of the debugger to diagnose the problem.
Revision History
Revision  Date        Description
0                     Initial Publication
1         11/11/2014  Graphical Trace Log compare / Description of Log
                      Size Tab in Trace Log / Code Coverage Histogram
                      view / Updated the Trace Logs filtering section
Determining Code Path
Suppose an application is producing incorrect results, corrupts some state, or otherwise
misbehaves. You might not know the application. How can you use the z/TPF debugger
and associated tools to determine what and where the problem is? The debugger provides
a variety of methodologies for determining code path, each with its own pros and
cons. We will explore the following:
 Trace Log Reports
 Graphical Trace Log compare
 Code Coverage Tool
 Using Trace Log and Code Coverage Together
 Debugger functions
o ECB Trace
o Stop on all functions
o High Level Breakpoints
o Step debug
o Animated step into
o Optimized code and narrowing in on the problem
Note that the techniques described will work for debugging production level or optimized
code.
Trace Log Reports
A trace log is an integrated macro and function trace that provides parameter values,
return values, macro call details, and the path through the code. The trace log facility was
originally designed for the application to call a z/TPF API to start and stop trace log
collection, which entailed modifying your application and building and loading your
code. For more information, go to
http://publib.boulder.ibm.com/infocenter/tpfhelp/current/topic/com.ibm.ztpfztpfdf.doc_put.cur/gtpp1/htlog.html
The z/TPF Debugger was enhanced to provide a convenient means to activate and
deactivate trace log without having to make any programmatic changes to the application.
1) Create a debugger session for the entry point of your application. Start the TPF
Toolkit, switch to the Remote System Explorer Perspective, and switch to the Remote
Systems view. Expand your TPF connection, right click the Debugger Subsystem and
choose New -> New Session. Fill in the registration details to capture the earliest
possible entry point in your application. Click Next.
Figure : New Debugger Registration Entry
The next pane allows you to name your debugger registration session. Enter a name.
Figure : Naming a Debugger Registration Entry
Because the debugger session is being used only to start trace log, you don't need to set
up the remote debug information locators or the source lookup path. Click Finish.
2) Register the debugger session by right clicking the debugger registration entry and
choosing Register.
Figure : Registering the debugger.
3) On the z/TPF system, modify the number of trace logs that can be run simultaneously
on the system by entering: ZASER TRLOG-1 (the default is 0).
4) Drive your application. The debugger will start.
5) In the debugger, turn on trace log by pushing the start trace log collection button.
The button to the right of the start trace log collection button allows you to stop the trace
log collection. Notice that you could use the debugger to navigate to a particular location
in your application, click the start trace log collection button, step over a function call or
so on, click the stop trace log collection button, and have a concise report of what
occurred on a narrow, specific path in your application.
Figure : Turning on trace log through the debugger.
NOTE: The trace log feature can also be started from the debug console by entering the
TRLOG command. The debug console interface provides a wider array of options
including writing to tape. In the debug console, enter trlog help for more information.
6) Fill in the location where the trace log file should be written and choose OK.
Figure: Enter the location on the z/TPF system where the trace log report needs to be
stored.
Status is shown in the debug console:
Figure : Trace log status is shown in the debug console.
7) Ensure there are no breakpoints in the breakpoints pane and click Resume to run the
transaction to completion.
Figure : Run the application to completion.
8) A message will appear on the z/TPF system to show processing completed for the
trace log run. Note the name of the file generated.
TLOG0001I 08.25.22 TRACE LOGGING STARTED FOR ECB:10566000,
LOGS WILL BE WRITTEN
TO FILE-/tmp/C941B1ECDE46C351.report+
PPTL0015I: ECB Trace Log Post-Processor has finished.+
9) Expand the files subsystem (a GUI FTP client) and double click the .report file to open
it in the TPF Toolkit .report viewer.
Figure : View source analysis results.
10) The trace log .report viewer shows you the macro, function, and module call
sequence in an indented format so you can understand the high level flow of your
application. This table view also allows you to see the elapsed time, the istream number,
and other details about the application.
Figure : Trace log results
The trace log report also shows you the parameters passed to macros and functions and
the return values, which might provide some details about the path that was taken:
Figure : Trace log showing macro parameters.
Figure : Trace log showing function parameters.
The source tab provides additional information with a textual view in LPEX:
Figure : Trace log source view.
Trace Log Reports / Filtering
LPEX Filter options:
LPEX provides powerful find and filter options. From the Source tab, enter CTL-f to use
the LPEX find option.
You can type in a regular expression, select the “Regular expression” checkbox, and
then select “Next” or “Previous” to find each entry.
Or you can select “All” to show only the matching lines.
Filter Options:
To show all of the ENTER/BACK macros, create a filter by clicking the New... button
at the bottom of the .report file viewer (shown in the figure above). Name the filter and
click Add....
Figure : Creating an LPEX Report Filter.
Select the Macro Record type and fill in the Macro name ENT*C where the '*' character
acts as a wild card. Click OK.
Figure : Creating an LPEX Report Filter Criteria.
Repeat this action for the BACKC macro.
Figure : Creating an LPEX Report Filter completed.
Select the new filter from the drop down and click the Run button.
Figure : Result from running an LPEX Report Filter.
Trace Log Reports / Combined View
This button gives you a combined view:
Figure : Trace log combined view source view.
Trace Log Reports / Statistics
You can also analyze the trace and see some statistics:
Figure : Trace log statistical analysis.
Figure : Trace log statistical analysis results
Figure : Trace log statistical analysis.
In the trace log analysis results, there is a tab called "Log size". What does this tab
represent?
Unfortunately, this tab is not intuitively named. It itemizes heap
(malloc/calloc/etc.) usage, similar to the ECBHEAP command in the debugger debug
console view. It lists the sizes of the heap blocks that were allocated and the number of
blocks of each size that were allocated. This can be very useful in diagnosing heap issues
(memory leaks, and so on). For example, in the following picture, 73 of the 150
allocated malloc blocks are 1 byte in size:
This seemed like a strange size of a malloc block to me so I decided to investigate
further. By clicking on the allocation tab, I can see that CTIS or QDB0 owns these
malloc blocks:
Since that didn't help me figure this out, I switched to the source tab and searched for
calloc until I located an instance of a 1-byte calloc to learn more about who was making
this call:
From the trace log tab, I right clicked and created a filter for calloc based on the
information I saw above:
which resulted in the following (the blue arrow shows the filter is applied):
In this way, you can examine your application's usage of heap.
Graphical Trace Log compare
TPF Toolkit provides a Trace Log feature that captures the execution path of a particular
entry control block (ECB) as it executes on your TPF system. You can then use the full
featured Trace Log editor to view the execution path and use the powerful filtering
capabilities available to better understand your application. The trace log compare feature
compares two execution paths side-by-side in a graphical editor. This allows you to
quickly ascertain points of deviation between two ECB executions and visually identify
the context where the deviation occurred.
You can choose to run the compare from the RSE. Just select the two trace logs you want
to compare and indicate that you want to compare them with each other.
Alternately, open two trace logs. From your trace log tab, select this button:
You can select the other open files from the drop down menu.
Here is the Trace Log compare. Notice that in one trace log, I have an ENTRC to QDB3.
But I do not have that in the second trace log.
I can also choose the next / previous difference buttons to scroll through the differences.
The graphical Trace Log compare is available in TPF Toolkit V4.2.
Code Coverage Tool
The code coverage tool shows you the code that has been executed. The code coverage
tool does not show you code path. For example, it does not show you if a function was
called once or 150 times or which statements in that function were executed on any
particular call. The code coverage tool does show you which statements were executed
at least once across all the times the function was called. There might be cases in which
you can infer code path (for example, if you know a function was only called once), but
such cases are limited and depend on the structure of the code. The code coverage tool is
useful for seeing which paths were not executed. For example, if you get an
application error that can occur on two different paths, you can determine from the code
coverage results if one path is not executed.
Debugger registration and code coverage registration are fundamentally different. Code
coverage registration entries capture code coverage data for every entry control block
(ECB) that executes the registered code even if the execution is performed
simultaneously by many ECBs. Debugger registration catches a single ECB and allows
you to view and manipulate the state of that ECB. As such, the instructions provided
here will yield the best results if this procedure is performed on an isolated system or in a
shared test system environment where you are the only user working with the registered
programs.
1) Create a code coverage registration session for your programs. Start the TPF Toolkit,
switch to the Code Coverage Perspective, and switch to the Remote Systems view.
Expand your TPF connection, right click the Code Coverage Subsystem and choose
New -> New Session.
Figure : New Code Coverage Filter (registration entry)
Notice that Automatically Perform Source Analysis is turned on for convenience.
Ideally, you'll want to work on a system that has all debug information loaded to the
system. However, you could run the collection, see which modules got hit (size
percentage is greater than 0), load the debug information for those modules and then run
source analysis. Or you can enable and use the Remote Debug Information feature to
automatically load the necessary debug information for you. For ideal source analysis
viewing results, you'll want the code built at optimization level 0. However, the tool will
work for higher levels of optimization.
Click Next. The next pane allows you to name your code coverage registration session.
Enter a name and click Next.
Figure : Naming Code Coverage Filter (registration entry)
Because I rebuilt my driver at -O0 and loaded it with debug information, I won't use the
Remote Debug Information feature. Click Next.
Figure : Code Coverage Remote Debug Information
The next screen is a convenient location to set up your source lookup paths when viewing
the source analysis results. Set up your source lookup path to your source code files and
click Finish.
Figure : Code Coverage Source Lookup Path
2) Start the collection by right clicking the registration entry and choosing Start
collection. (The z/TPF system will automatically register for you as well). Notice that
the results of starting the session are displayed in the Remote Console view.
Figure : Code Coverage Start
3) Drive your application. The code coverage tool will collect details regarding the
execution of your application. You can right click the registration entry and choose show
collection status or enter zddbg CODecoverage display COLlection to verify that your
application execution has been recorded.
4) Save and stop the code coverage collection session. Right click the code coverage
registration session and choose Save and Stop collection.
Figure : Save and stop the code coverage collection results.
5) If you didn't select Automatically Perform Source Analysis, right click the
timestamp entry and choose Perform source analysis. This may take a while. Status is
available by right clicking the timestamp entry and clicking Show status. Alternatively,
you can issue the command ZDDBG COD DISP ALL on the z/TPF system.
Figure : Perform source analysis for the code coverage collection results.
6) Right click the timestamp entry and choose view results (or double click the timestamp
entry):
Figure : View source analysis results.
7) The code coverage view shows you statistics about the execution of your application,
such as which modules, objects, and functions have been executed.
The results can be sorted or filtered to make artifacts easier to locate:
Figure : Sorting the Code coverage view.
Figure : Code coverage view with results expanded.
Double click on a file or function to see which lines of code were executed:
Figure : Code coverage view with source files expanded.
Notice that to the right of the scroll bar there is a summary that shows you the executed
and non-executed lines. Clicking in this summary takes you to that location in the
file.
Figure : Code coverage editor view summary.
Figure : Larger view of Code coverage editor.
The Next annotation and Previous annotation buttons allow you to go from section to
section. The drop downs control which sections you traverse.
Figure : Next annotation drop down
Also, you can export the code coverage results to archive the results or perform analysis
in a spreadsheet program.
The code coverage tool allows you to perform a comparison between two or more code
coverage collections. Do the following to compare two code coverage collections:
1) Use the code coverage tool to create a collection based on a run of a driver (use save
and stop collection).
2) Use the code coverage tool to create a collection based on a variation of a run of a
driver (use save and stop collection).
3) Perform source analysis on each collection.
4) View the results of each collection.
5) CTL-left click the collections in the code coverage view to select the sessions to
compare. Right click and choose compare results.
Figure : Initiating a code coverage comparison.
6) The compare results pane shows the difference in execution between the two
collections. There are several display options available such as hiding panes, showing
absolute percentages instead of deltas and so on.
Figure : Code coverage comparison.
7) Right click a source file in the line table and choose compare source analysis results to
see a side by side comparison of the source file from the two different code coverage runs
(the source files do not have to match exactly but it is ideal for the source files to match).
Indicate to the code coverage tool which files are to be used in the comparison.
Figure : Initiating a code coverage comparison on the source file.
8) Examine the code coverage comparison results in the source file editor.
Figure : Viewing a code coverage comparison on the source file.
In this way you can see the concise difference between two or more code coverage
collections. As such, you can see minute differences in the code path taken by the
application.
Further, note that if you use the debugger while code coverage collection is running, you
will influence the results of the code coverage collection. This can be useful for proving
that error paths and other paths function correctly and have been fully tested.
Code Coverage Histogram
The Code Coverage tooling available in TPF Toolkit is an excellent means of analyzing
test suites and determining the overall quality of your testing efforts. The Code Coverage
feature lets you determine the percentage of code executed at various levels of detail, and
allows for multiple code coverage results to be compared in a graphical comparison
editor.
The Code Coverage histogram feature (TPF Toolkit 4.2.1) lets you view the distribution
of coverage results across user-defined percentage ranges in a graphical manner. This
feature provides a more intuitive way to consume code coverage results for thousands of
modules, objects, and functions. The histogram allows you to control whether you view
the size or line percentages, and also whether you view the modules, objects, or
functions.
To use the Code Coverage Histogram, in the code coverage perspective, select up to three
code coverage sessions, right click, and select “Show results summary.”
The tab at the bottom allows you to view this as raw data instead of as a chart. This is
very useful when I want to see at a glance which modules have not been executed
as part of my code coverage results. For example, if I am running code coverage to verify
that I have tested all of my segments, from this display I can quickly see which segments
I have left to test.
A very useful feature is the Range Editor, which allows you to fine tune what is shown
in your histogram. The default shows 0-20, 21-40, 41-60, 61-80, and 81-100.
Let’s say that I want to focus on modules that have a large portion of their code executed.
I can change the ranges so that all segments with 1% to 50% of their code executed are
lumped together.
Using Trace Log and Code Coverage Together
Using the trace log and code coverage tools at the same time can give you a fuller
understanding of the code path.
1) Create a code coverage registration entry and start the code coverage collection.
2) Create a debugger registration entry and register at the highest entry point possible.
3) Drive the application.
4) Collect and examine the trace log report.
5) Save and stop the code coverage collection. View the results.
You can use the trace log report to understand the high level path of the application and
investigate the code coverage tool results to see which lines of code were executed by the
application.
There are many ways to use these tools to determine code path. Here are some examples:
 Suppose you want to know if a field is used on a particular application path. Use
grep to identify which segments manipulate that field. Use the code coverage tool
to see if those paths are executed. Then use the trace log results to see the function
and macro parameters that caused these paths to be taken or not taken. You can
also use the debugger to further investigate a particular path by using Register by
function, SVC, program, User defined, CTEST, system error, or so on to start the
debugger at the location you are interested in.
 Suppose you are making a change to ported code that you do not know. Use the
code coverage tool to ensure that all paths that call your code are tested. Or learn
how and why your function was not called when you expected it to be.
 Suppose you want to know the path difference between two slight variations. Use
the code coverage tool to create two separate runs, one for each variation.
Compare the results with the code coverage comparison feature. Then examine
the trace log data and code coverage data to see how and why the paths deviated.
Debugger functions
The debugger offers a few different tools to understand the path of execution of your
application. The limitation is that you cannot go backwards after some execution has
occurred and you may need to restart your trace to understand something that has
previously occurred.
Debugger views: ECB Trace View
The ECB trace view is available during a debugger session. It is very similar to the trace
log except that the content is in the opposite order.
In the TPF Toolkit version 4.2.1, the ECB Trace View includes indentation.
In this example, I can see that program QDB2 did an ENTRC to QDB3:
I can analyze the ECB trace and see the number of ENTRCs QDB2 has done.
Debugger functions: ECB Trace
ECBTRace is available for any ECB on the system from the ZDECB TR command.
Also, you will see the ECB Trace in z/TPF dumps. It is very similar to the trace log
except that it only shows the last XX entries where XX is specified by your system
administrator. The other difference is that the content is in the opposite order.
The ECBTRace command is available in the debugger through the debug console so you
can view this content at any point in time. ECBTRace can give you a sense of the recent
path of execution. This command also works inside dump viewer and ECB monitor
sessions.
Figure : ECBTRace example
Debugger functions: Stop on all functions
Stop on all function entries behaves as if you set a breakpoint at every C/C++ function
entry point (including TMSPC and PRLGC) and BAL external entry points. In this way
you can click the resume button to run from entry point to entry point to get a sense of the
path through the application code.
Figure : Stop on all functions example
Debugger functions: High Level Breakpoints
Load Breakpoints
Load breakpoints stop the execution of your ECB at the entry point of a module the first
time it is called. Load breakpoints can be used to get a sense of code path at a very high
level. This method may be particularly useful for basic assembler language (BAL)
programs or simple sequential C/C++ code.
Figure : Load breakpoints
Wildcards can be used to stop at multiple locations. For example, using * will stop at the
first entry point to every module. Or use QZZ* to stop at the first entry point to every
module whose name starts with QZZ.
Figure : Load breakpoint with wild card
In this way you can click the resume button to run from entry point to entry point to get a
sense of the path through the application code.
Macro/Macro Group Breakpoints
In much the same way, Macro and Macro Group breakpoints can be used at the macro
level. First we'll look at Macro Groups.
The ENTER macro group will cause the application to stop at every call to the ENTxC
macros and the BACKC macro.
Figure : Macro breakpoint
Figure : Macro breakpoint with ENTER group
Notice that you can specify the executable (module name) and so on to limit where these
breakpoints will stop to refine your debugging.
ALLSVC is a macro group breakpoint that will stop at every SVC call. Running with
this macro group can give you a good sense of what your application is doing from a
z/TPF macro perspective.
DFALL is a macro group that stops the application at z/TPFDF macro (like DBOPN) and
function (like dfopn) calls. This allows you to run from z/TPFDF call to call to quickly
get a sense of the z/TPFDF usage in your application.
A variety of other macro groups exist, such as STORAGE (getcc, detac, maloc, and so
on), FIND, FILE, and so on. You can also set a breakpoint for a particular macro through
the same pane to see how that macro is used throughout your application on your code
execution path.
Once you have created a macro or macro group breakpoint, you can click the resume
button to run from macro call to macro call to get a sense of the path through the
application code.
Defining your own “Stop at my function” breakpoints
Today, you can choose to stop on all functions.
If you would like to refine that so that you only stop at the functions you define, and / or
at certain modules, you can use entry breakpoints.
When you set an entry breakpoint (entry meaning function breakpoint), you get all these
drop downs so you can pick the module, object and function.
There is a checkbox at the top to defer the evaluation of the breakpoint until later. And once you
click defer, you can enter wild cards. As such, you can define your own custom "stop at my
functions" breakpoint.
As you can see in the first picture, you can define multiple breakpoints of these types to
cover a range of situations.
Debugger functions: step debug
Step debug is a feature that allows you to limit your debugging to a list of specified
modules. In the debug console, use the step debug set command to set up the list of
programs in which the application is allowed to stop. If you registered by program name,
the registration mask is used by default.
Figure : stepdebug set command
Toggle on the step debug (step filters) button on.
Figure : stepdebug (step filters) button
Now use the step into button. The debugger will only stop in the modules listed in the
step debug list or at any breakpoints that you’ve set.
Figure : step into button
IMPORTANT NOTE: Make sure you toggle the step debug (step filters) button off when
you are finished. The setting of the step debug (step filters) button is saved. A
pop-up box warns you the first time you press the step into button while it is behaving as
step debug. Do not ignore this warning! Many users have thought the debugger was
broken when they had simply forgotten to turn the step debug feature off.
Debugger functions: animated step into
Another feature that can be used is the animated step into button, which automatically
does a step into at a specified time interval. This feature works well with the ECB
summary view (which allows you to see the current state of your application at a glance
from a z/TPF perspective).
Figure : animated step into with the ECB Summary view open
Debugger functions: Optimized code and narrowing in on the
problem.
The previously discussed debugger features can be used to debug optimized code with or
without debug information loaded. Once you have a high level view of your application,
you can begin to use other debugger features to narrow in on the source of your problem.
As you narrow in on the area of the problem, rebuild those segments at -O0 and load the
code to have an ideal debugging experience, with all available variables and linear code
execution when stepping. Assembler code does not need to be rebuilt; just load its debug
information.
information. You can also use the Remote Debug Information feature to have the
debugger automatically load the debug information for you.
A variety of breakpoints can be used to navigate your application:
 Entry (function) breakpoints stop the execution of the application when the
function is executed.
 Load breakpoints, as previously discussed.
 Macro and macro group breakpoints, as previously discussed.
 Watch breakpoints allow you to stop the application when some piece of data
changes:
o the value in a variable (for example, &i)
o the value pointed to by a register (suppose R5 contains 0x1000; then
address 0x1000 is monitored for 4 bytes for a change in value)
Figure : watch breakpoint with the address to watch in a register
o the contents of a register (suppose R5 contains 0x1000; when
the register contents change, the application is stopped)
Figure : watch breakpoint for the value in a register
o Note that watchpoints can be limited to changes to a specific value or
within a specified range.
 Line and address breakpoints can also be used.
 Run to location is available at a right click in the editor view. Run to location
behaves just like a line breakpoint except that the breakpoint does not persist
after the debugger stops.
 Jump to location is available at a right click in the editor view. Jump to
location is not a breakpoint but rather the equivalent of changing the PSW
address. Care must be taken when using Jump to location because it is very
easy to cause a dump to occur.
The modules view allows you to see all modules that the debugger is currently aware of.
You can add modules by clicking the green plus and entering the module name.
Figure : Modules view
Then expand the given module to see the functions. You can right click a function to set
an entry breakpoint. You can also double click the function or source file to open the
source file and then double click the source file to set line breakpoints.
Use the variables view, monitors, registers view, memory view (XML maps), ECB view,
data level view, DECB view, SW00SR view, and so on to further diagnose the problem
as you walk through the code at a low level.
Conclusion
Several tools exist to show you an application's path at different levels of detail. The
trace log feature or debugger high level breakpoints are a good place to start to get a high
level view of an application's path. The code coverage tool and debugger can then help
you narrow in on the problem.