PROJECT REPORT
(Project Semester January 28- July 5)
“Automation for the Product Validation Team of NCSim”
Submitted by
Shubham Singh
100886018
Under the Guidance of
Miss Sakshi Grover, Lecturer, Thapar University
Mr. Amit Aggarwal, Senior Member of Consulting Staff, NCSIM
Department Of
Electronics And Communication Engineering
THAPAR UNIVERSITY, PATIALA
(Deemed University)
Jan 28th – July 5th, 2010
DECLARATION
I hereby declare that the project work entitled “Product Validation of NCSIM” is an authentic
record of my own work carried out at Cadence Design Systems India Ltd. as requirements of six
months project semester for the award of degree of B.E. (Electronics and Communication
Engineering), Thapar University, Patiala, under the guidance of Mr. Amit Aggarwal and Miss
Sakshi Grover, during Jan 28th, 2010 to July 5th, 2010.
Shubham Singh
100886018
Date: 5th July, 2010
Certified that the above statement made by the student is correct to the best of our knowledge
and belief.
Miss Sakshi Grover, Lecturer, Thapar University
Mr. Amit Aggarwal, Senior Member of Consulting Staff, NCSim Department
ACKNOWLEDGMENT
The successful realization of a project is the result of the combined efforts of people from
disparate fronts. It is only with their support and guidance that this project could be a success.
I would like to express my gratitude to my manager Mr. Amit Aggarwal (Senior Member of
Consulting Staff, NCSIM Department) for his valuable supervision and full cooperation
during the course of my project.
I extend my deep sense of gratitude to my supervisor Mr. Arshad Javed (Member of
Consulting Staff). His valuable advice was one of the important factors that helped me in
developing my project. I sincerely thank him for the technical assistance he gave me
right from the start of this project and for all the cooperation he provided during
the development stages.
I extend my thanks to Mr. Dinesh Malik (Software Engineer) for his invaluable technical
guidance and support due to which my project could be a success.
Shubham Singh
100886018
Index
1. Introduction…………………………………………………………………………..…… 6
2. Industry…………………………………………………………………………….……... 10
3. About NCSIM…………………………………………………………………….…….… 15
3.1 Debugging…………………………………………………………………….…….….. 15
3.1.1 Compiling Source Files……………………………………………….……….. 15
3.1.2 Elaborating Source Files…………………………………………….………… 16
3.1.3 Simulating Source Files…………………………………………….…………. 17
3.2 Windows in Simvision…………………………………………………….…………… 18
3.2.1 Simvision Tool Bar……………………………………………….………………19
3.2.2 Console/Design Window……………………………………….........................19
3.2.3 Source Browser/ Schematic Tracer…………………………….…………………20
4. PURECOV..………………………………………………………………………………..…22
4.1 Introduction……………………………………………………………………………... ...22
4.2 Key PureCoverage Feature………………………………………………………………...22
4.3 Finding Untested areas of “Hello World”………………………………………………….23
4.4 Sample Program……………………………………………………………………………24
4.5 Running an Instrumented Program……………………………………………………….. 26
5. PURECOV AUTOMATION..……………………….. ………………………………….... 28
5.1 Problem Statement…………………………………….…………………………………. 28
5.2 Languages used and required description………………………….…………………...... 28
5.2.1 C Shell……………………………………………………………..……………. 28
o Basic Statement…………………………………………………………...….29-35
o Criticism of C shell………………………………………………………………35
o Basic Filters…………………………………………………………..………37-38
5.2.2 K Shell……………………………………………………………………………..38
5.2.3 PERL………………………………………………………………………….……39
5.3 Introduction to Rational Clearcase…………………………………………………………39
5.3.1 Purpose of clearcase………………………………………………………………..39
5.3.2 Clearcase Terminology…………………………………………………………….39
5.3.3 Versioned Object Base………………………………………………………….….40
5.3.4 Views……………………………………………………………………………....40
5.3.5 Checkout……………………………………………………………...……………40
5.3.6 Checkin…………………………………………………………………………….40
5.4 Script Package for Purecov Automation…………………………………………………...41
5.4.1 CopyMergePurecov…………………………………………..…………………..42
o Use of script……………………………………………………………………...44
5.4.2 RunRegression……………………………………………..……………………..46
o Use of script (switch description)………………………………………………..49
5.4.3 MergePcv…………………………………………………………...…….……….50
o Use of script……………………………………………………………………...51
5.4.4 Report_mail………………………………………………………..…………...…52
o Use of script……………………………………………………………………...56
o View of ‘mail.txt’………………………………………………………………...57
o unusedfunc……………………………………………………..……………….58
o View of ‘UnusedFunc.txt’……………………………………………………….59
6. Script for extraction from hojo and further performance comparison for report……...60
o Use of Script……………………………………………………………………..62
o Output(Snipped)………………………………………………………………….62
7. CCMS………………………………………………………………………………………. 64
7.1 Introduction………………………………………………………………………………..64
7.2 CCMS Logical State Model Workflow……………………………………………...………64
7.3 The Fruit System : Second method of filing the CCR……………………………..………65
8. Custom Script for SQL query and further manipulation………………..……………… 67
8.1 Append_finder……………………………………………………………………….…….67
8.2 Append_data_manipulation………………………………………………………………..68
o About the script …………………………………………………………………..69
8.2.1 Options for CCMS script access……………………………………………..69
8.2.2 Accessing the CCMS script from /grid……………………………………...……..70
8.2.3 ccrprint……………………………………………………………………………..72
9. Script for mailing data of required QUERY to required Personnel……………………. 73
9.1 Snapshot of script…………………………………………………………………………73
9.2 Config (snapshot)…………………………………………………………………………77
9.3 Script to determine the name of engineer/manager for particular CCRS………………...78
o Use of script……………………………………………………………………...78
10. Conclusion…………………………………………………………………………………. 79
11. References………………………………………………………………………………… 80
12. Impediments/difficulties faced during project semester on project…………………… 81
1. Introduction
Several years ago, the best of electronics designs went into mainframes, workstations, and
high-end military and aerospace applications. Today, complex chips go into millions of set-top
boxes, PDAs, cellular phones, and other consumer items, all designed and brought out under
intense cost and time-to-market pressures. Electronics now touches almost every part of our
lives; it is ubiquitous, and is literally changing everything in and around our lives for the
better. This is truly the Golden Age of Electronics. To keep pace with market demand for more
performance and functionality in today's mobile phones, digital cameras, computers, automotive
systems and other electronics products, manufacturers pack millions of transistors onto a single
chip. This massive integration parallels the shift to ever smaller process geometries, where the
chip's transistors and other physical features can be smaller than the wavelength of light used
to print them. Designing these complex semiconductor devices is beyond the scope of pen and
paper. Electronic design automation (EDA) enables us to design and manufacture semiconductor
devices with such phenomenal scale, complexity and technological challenges.
1.1 What is EDA?
Electronic Design Automation (EDA) is a complex and highly leveraged technology that
enables the electronics industry by handling design complexity and enabling faster
time-to-market, higher productivity, accuracy and efficiency.
EDA is the category of tools for designing and producing electronic systems, ranging from
printed circuit boards (PCBs) to integrated circuits. It is sometimes referred to as ECAD
(electronic computer-aided design) or just CAD (computer-aided design).
Terminology:
The term EDA is also used as an umbrella term for computer-aided engineering, computer-aided
design and computer-aided manufacturing of electronics in the discipline of electrical
engineering. This usage probably originates in the IEEE Design Automation Technical
Committee.
1.2 Growth of EDA
EDA for electronics has rapidly increased in importance with the continuous scaling of
semiconductor technology. Some users are foundry operators, who operate the semiconductor
fabrication facilities, or "fabs", and design-service companies who use EDA software to evaluate
an incoming design for manufacturing readiness. EDA tools are also used for programming
design functionality into FPGAs.
1.3 EDA Product Areas
EDA is divided into many (sometimes overlapping) sub areas. They mostly align with the path of
manufacturing from design to mask generation. The following applies to chip/ASIC/FPGA
construction but is very similar in character to the areas of PCB designing:
Floorplanning: The preparation step of creating a basic die-map showing the expected
locations for logic gates, power & ground planes, I/O pads, and hard macros.
Logic synthesis: Translation of a chip's abstract, logical RTL description (often specified via a
hardware description language, or "HDL", such as Verilog or VHDL) into a discrete netlist of
logic-gate (Boolean logic) primitives.
Behavioral Synthesis, High Level Synthesis or Algorithmic Synthesis: This takes the level of
abstraction higher and allows automation of the architecture exploration process. It involves the
process of translating an abstract behavioral description of a design into synthesizable RTL. The
input specification is in languages like behavioral VHDL, algorithmic SystemC, C++, etc., and
the RTL description in VHDL/Verilog is produced as the result of synthesis.
Co-design: The concurrent design, analysis or optimization of two or more electronic systems.
Usually the electronic systems belong to differing substrates such as multiple PCBs or Package
and Chip co-design.
EDA Databases: Databases specialized for EDA applications. Needed since historically general
purpose databases did not provide enough performance.
Simulation: Simulate a circuit's operation so as to verify correctness and performance.
• Transistor simulation: Low-level transistor simulation of a schematic/layout's behavior,
accurate at device level.
• Logic simulation: Digital simulation of an RTL or gate netlist's digital (Boolean 0/1)
behavior, accurate at Boolean level.
• Behavioral simulation: High-level simulation of a design's architectural operation,
accurate at cycle level or interface level.
Hardware emulation: Use of special-purpose hardware to emulate the logic of a proposed
design. It can sometimes be plugged into a system in place of a yet-to-be-built chip. This is
called in-circuit emulation.
Clock Domain Crossing Verification (CDC check): Similar to linting, but these checks/tools
specialize in detecting and reporting potential issues like data loss and meta-stability due to
the use of multiple clock domains in the design.
Formal verification and Model checking: Attempts to prove, by mathematical methods, that
the system has certain desired properties, and that certain undesired effects (such as deadlock)
cannot occur.
• Equivalence checking: Algorithmic comparison between a chip's RTL description and the
synthesized gate netlist, to ensure functional equivalence at the logical level.
• Power analysis and optimization: Optimizes the circuit to reduce the power required.
• Static timing analysis: Analysis of the timing of a circuit in an input-independent manner,
hence finding a worst case over all possible inputs.
• Transistor layout: For analog/mixed-signal devices; sometimes called polygon pushing. A
prepared schematic is converted into a layout map showing all layers of the device.
• Design for Manufacturability: Tools to help optimize a design to make it as easy and cheap
as possible to manufacture.
• Design closure: IC design has many constraints, and fixing one problem often makes another
worse. Design closure is the process of converging to a design that satisfies all constraints
simultaneously.
• Physical verification: Checking whether a design is physically manufacturable, will not have
any function-preventing physical defects, and will meet the original specifications.
• Design rule checking (DRC): Checks a number of rules regarding placement and
connectivity required for manufacturing.
• Layout versus schematic (LVS): Checks whether the designed chip layout matches the
schematics from the specification.
• Layout extraction (RCX): Extracts netlists from layout, including parasitic resistors (PRE),
and often capacitors (RCX), and sometimes inductors, inherent in the chip layout.
• Automatic test pattern generation (ATPG): Generates pattern data to systematically
exercise as many logic gates and other components as possible.
• Built-in self-test (BIST): Installs self-contained test controllers to automatically test a logic
(or memory) structure in the design.
• Design for Test (DFT): Adds logic structures to a gate netlist to facilitate post-fabrication
(die/wafer) defect testing.
• Technology CAD (TCAD): Simulates and analyses the underlying process technology.
Semiconductor process simulation, the resulting dopant profiles, and electrical properties of
devices are derived directly from device physics.
• Electromagnetic field solver: Solves Maxwell's equations directly for cases of interest in IC
and PCB design. Field solvers are slower but more accurate than the layout extraction above.
2. Industry
About Cadence
2.1 Mission
“Be and be recognized as the most indispensable
design partner to the electronics industry”
2.2 Brand Promise
“Invent new ways of enabling customers
to achieve breakthrough results”
2.3.1 Cadence Profile
Cadence Design Systems, Inc (NASDAQ: CDNS) was founded in 1988 by the merger of SDA
Systems and ECAD, Inc. Cadence Design Systems , headquartered in San Jose, California, is one
of the world's leading suppliers of electronic design technologies and engineering services in the
electronic design automation (EDA) industry. The primary corporate product is software used
to design chips and printed circuit boards.
For years it has been the largest company in the EDA industry. Cadence customers use its
software, hardware, and services to overcome a range of technical and economic hurdles.
Cadence technologies help customers create mobile devices with longer battery life. Designers of
ICs for game consoles and other consumer electronics speed their products to market using its
hardware simulators to run software on a ‘virtual’ chip long before the actual chip exists.
Cadence bridges the traditional gap between chip designers and fabrication facilities, so that
manufacturing challenges can be addressed early in the design stage. And the company’s custom IC
design platform enables designers to harmonize the divergent worlds of analog and digital design
to create some of the most advanced mixed-signal system on chip (SoC) designs. These are just a
few of the many essential Cadence solutions that drive the success of leading IC and electronic
systems companies.
Cadence employs approximately 5,000 people and reported 2008 revenues of approximately
$1.04 billion. In November 2007 Cadence was named one of the 50 Best Places to Work in
Silicon Valley by San Jose Magazine. In 2008, Cadence ranked 18th on the list of largest
software companies in the world. Cadence's major competitors are Synopsys, Mentor
Graphics and Magma Design Automation.
Cadence Design Systems is the world’s largest supplier of EDA technologies and engineering
services. Cadence helps its customers break through their challenges by providing a new
generation of electronic design solutions that speed advanced IC and system designs to
volume. Cadence solutions are used to accelerate and manage the design of semiconductors,
computer systems, networking and telecommunications equipment, consumer electronics, and
a variety of other electronics - based products.
Cadence has its sales offices, design centers, and research facilities around the world.
2.3.2 What Does Cadence do?
There are people in the electronics industry who design Integrated Circuits (IC's) and Printed
Circuit Boards (PCB's). These engineers are commonly referred to as Electronic designers. They
decide how the IC's and PCB's can most efficiently and effectively be designed to perform their
intended functions. Today ICs and PCBs have become so complex that it is
virtually impossible to design them manually.
Cadence provides software programs for electronic designers allowing them to design their
products with greater efficiency and more precision. The term EDA software stands for
Electronic Design Automation software. Cadence provides EDA software to electronic design
engineers and is one of the leading companies in the EDA industry.
Cadence does most of its business with other electronic companies. Its EDA software tools allow
them to design the products that they pass on to individual consumers around the globe. EDA
technology is used to design chips and such EDA tools used to design chips are created by
Cadence. In today’s world chips are manufactured using EDA technologies. Electronic products
such as small cell phones, PDAs and laptops would simply not be possible without EDA.
2.3.3 Cadence’s Different Platforms
• Virtuoso Platform: Tools for designing full-custom integrated circuits; includes schematic
entry, behavioral modeling (Verilog-AMS), circuit simulation, full custom layout, physical
verification, extraction and back-annotation. Used mainly for analog, mixed-signal, RF and
standard-cell designs, but also memory and FPGA designs.
• Encounter Platform: Tools for the creation of digital integrated circuits. This includes floor
planning, synthesis, test, and place and route. Typically a digital design starts from Verilog
netlists.
• Incisive Platform: Tools for simulation and functional verification of RTL, including
Verilog, VHDL and SystemC based models. Includes formal verification, formal equivalence
checking, hardware acceleration and emulation, and Verification IP (VIP) for complex
protocols including PCI Express, AMBA, USB, SATA, OCP and others.
• Allegro® Platform: Tools for co-design of integrated circuits, packages, and PCBs.
• OrCAD/PSpice: Tools for smaller design teams and individual PCB designers. In addition
to EDA software, Cadence® provides contracted methodology and design services as well
as silicon design IP, and has a program to make it easier for other EDA software to
interoperate with the company's tools.
2.3.4 Cadence Activities in India
Cadence Design Systems (India) was established in 1988 as an R&D site at Noida, on the
outskirts of New Delhi. With more than 500 employees, it is now the largest research and
development site outside of North America. It includes such R&D groups as PSD, Custom IC,
digital IC, DFM, and SFV as well as Customer Support and IT. The sales and marketing
organization was established in 1997 in Bangalore, India.
Cadence Design Systems (India) Pvt. Ltd. at Noida is the largest R&D Center of Cadence Design
Systems Inc. outside the US.
Key Activities:
• Responsible for developing several critical and mainstream technology products across the
entire spectrum of electronic and system design automation.
• Cadence India serves customers in the local market, and supports customers worldwide
through its Customer Response Center. Its Methodology Services group helps create
Intellectual Property to optimize the use of Cadence tools in customers' design flows.
• The IT group in India provides systems support to all the Cadence offices across the world.
• Cadence India plays a pioneering role in promoting the development of the electronics
industry in India.
• Cadence India is also creating partnership programs with premier engineering institutes in
India, including software grants, joint product development programs and fellowship
programs.
• Cadence India plays a key role in promoting the annual VLSI Design Conference and the
VLSI Design and Test (VDAT) conference.
• Cadence India is focused on developing design automation solutions to address the needs
of the 65/45 nm technology process nodes by leveraging Cadence's leadership in design
automation technology and its partners' leadership in electronic design, process technology,
and chip manufacturing.
Cadence Design Systems (India) has evolved to be a leader in technology at the international
level through representation in forums like the VITAL TAG and the IEEE Timing
subcommittee, which is responsible for defining the VITAL standard (VHDL Initiative Towards
ASIC Libraries), and in the Synthesis Inter-operability Working Group (SIWG) set up under the
auspices of VHDL International.
Offices:
The Corporate Resource Centre is based in Noida. It includes facilities for research and
development as well as IT support for Cadence worldwide and Global Customer Care. India
Field Operations has offices in Bangalore, Noida and Hyderabad that provide sales and
engineering support across all Cadence platforms and for industry initiatives.
About Corporate Vice President and Managing Director:
Jaswinder Ahuja has been with Cadence since 1988, assuming responsibility as
Managing Director of Cadence's India operations in May 1996 and the additional role of leading
Strategic Partnerships in November 2002. Under his leadership, the engineering center in India
has grown significantly and clearly established its leadership in various design automation
technology areas.
Ahuja is focused on developing design automation solutions to address the needs of the
65/45 nm technology process nodes by leveraging Cadence's leadership in design automation
technology and its partners' leadership in electronic design, process technology, and chip
manufacturing.
3. About NCSIM
Cadence Design Systems is one of the world's leading suppliers of electronic design
technologies and engineering services in the electronic design automation (EDA) industry. The
primary corporate product is software used to design chips and printed circuit boards.
One of the most sought-after products of Cadence Design Systems is NCSIM. It is a simulation
system comprising three tools: a compiler, an elaborator, and a simulator.
System:
A system exists and operates in time and space.
Model:
A model is a simplified representation of a system at some particular point in time or space
intended to promote understanding of the real system.
Simulation:
A simulation is the manipulation of a model in such a way that it operates on time or space to
compress it, thus enabling one to perceive the interactions that would not otherwise be apparent
because of their separation in time or space.
3.1 Debugging
You can debug problems in your design in text mode by entering Tcl commands in a window, or
in GUI mode using the SimVision analysis environment. The SimVision environment consists of
a main window, the SimControl window, which lets you interact directly with the simulator.
SimControl includes several debug tools that let you observe the value of selected signals,
traverse the design hierarchy, trace signals to find the source of a problem, step through the
simulation cycle to debug delta cycle bugs, and view RTL models in schematic form.
3.1.1 Step 1: Compiling Source Files
Compile your HDL source files by entering the ncvlog command (for Verilog) or the ncvhdl
command (for VHDL) in a command window.
Compilation Process:
Syntax:
ncvlog [options] filename [filename ...]
ncvhdl [options] filename [filename ...]
Examples:
ncvlog -messages adder.v
ncvhdl -messages -v93 adder.vhd
Options can be used with these commands; here, -v93 enables VHDL-93 language features.
3.1.2 Step 2: Elaborating the Design
The elaborator, ncelab, constructs a design hierarchy based on the instantiation and configuration
information in the design, establishes signal connectivity, and computes initial values for all
objects in the design. The elaborated design hierarchy is stored in a simulation snapshot, which
is the representation of your design that the simulator uses to run the simulation.
Use the ncelab command to elaborate the design. The argument is the Lib.Cell:View name of the
compiled top-level design unit.
Elaboration Process :
Syntax:
ncelab [options] [Lib.]Cell[:View] ...
• Lib is the name of the library that contains the top-level unit.
• Cell is the name of the top-level architecture or module.
• View is the view name of the top-level unit.
Examples:
ncelab worklib.top:configuration
ncelab -messages worklib.adder_top:behav
ncelab -notimingchecks work.cpu_top:rtl
3.1.3 Step 3: Simulating the Snapshot
After you have compiled and elaborated your design, you can invoke the simulator, ncsim. This
tool simulates Verilog and VHDL using the compiled-code streams to execute the dynamic
behavior of the design.
ncsim loads the snapshot as its primary input. It then loads other intermediate objects referenced
by the snapshot. The outputs of simulation are controlled by the model or debugger. These
outputs can include result files generated by the model, Simulation History Manager (SHM)
databases, or Value Change Dump (VCD) files.
Simulation Process:
Syntax: ncsim [options] snapshot
Example: ncsim -gui worklib.adder_top:behave
Options that we can use:
• -gui -- Invoke the simulator with the SimVision environment.
• -tcl -- Run in command-line mode.
We use SimVision to view the simulation results generated by NCSIM:
• NC-Sim for simulation.
• SimVision for visualization.
3.2 Windows in SimVision
Use the Send to toolbar to export (send to) selected objects or scopes to a set of other windows.
• When multiple instances of a window are allowed, the default destination window (the
target window) is indicated by an icon in its lower-left corner. To toggle a window's target
status, just click its target icon.
3.2.1 SimVision Toolbar:
A cursor control toolbar: instead of updating values continuously, you can choose not to Watch
Live Data.
A Simulation Control toolbar:
You can control the connected simulation with the run button (or Simulation—Run or F2), the
interrupt button (or Simulation—Stop), the reset button (or Simulation—Reset to Start), the
run-step button (or Simulation—Step or F6), and the run-next button (or Simulation—Next or F6).
You can run the simulation until the end of the simulation or until the next breakpoint or run the
simulation for a specific time. Use the Stop icon to stop the simulation upon activity of the
selected object.
3.2.2 Console / Design Browser :
Use the Console window to interact with the simulator and the SimVision analysis environment,
or to look at the standard output of the simulator.
Use a Design Browser window to browse the design hierarchy and objects, and to send them to
other SimVision windows:
3.2.3 Source Browser / Schematic Tracer :
Use the Source Browser window to browse your source code or examine the scope definition
declaring the selected objects.
The Schematic Tracer window displays abstract RTL models and gate-level designs in
schematic form:
These are the windows that open when SimVision is invoked.
4. PURECOV
PureCoverage is a test-coverage monitoring program that is both effective and easy to use. Once
PureCoverage is installed, you can immediately start using it on your applications by adding the
word purecov to your link line.
4.1 INTRODUCTION
Test coverage data is a great help in developing high-quality software, but most developers find
coverage tools and the data they produce too complex to use effectively. PureCoverage is the
first test coverage product to produce highly useful information easily. During the development
process, software changes daily, sometimes hourly. Unfortunately, test suites do not always keep
pace. PureCoverage is a simple, easily-deployed tool that identifies the portions of your code that
have not been exercised by testing. PureCoverage lets you:
• Identify the portions of your application that your tests have not exercised
• Accumulate coverage data over multiple runs and multiple builds
• Merge data from different programs sharing common source code
• Work closely with Purify, Rational's run-time error detection program, to make sure that
Purify finds errors throughout your entire application
• Automatically generate a wide variety of useful reports
• Access the coverage data so you can write your own reports
One of the keys to high-quality software is comprehensive testing and identification of problem
areas throughout the development process. PureCoverage provides the information you need to
identify gaps in testing quickly, saving precious time and effort.
4.2 Key PureCoverage features
PureCoverage’s unique capabilities include:
• Faster analysis. PureCoverage helps reduce time spent determining problem areas of your
code. PureCoverage's outline view provides detailed, yet manageable, coverage information
at the executable, library, file, function, block and line levels. A point-and-click interface
provides immediate access to increasing levels of detail and lets you view annotated source
with line-by-line statistics.
• Comprehensiveness. PureCoverage monitors coverage in all of the code in an application,
including third-party libraries, even in multi-threaded applications. It identifies critical gaps
in application testing suites, providing accumulated statistics over multiple runs and
multiple executables.
• Customized output. PureCoverage allows you to tailor coverage output to your specific
needs using report scripts. For example, you can use the scripts to generate reports that:
o Inform you when coverage for a file dips below a threshold.
o Show differences between successive runs of an executable, making improvements
immediately obvious.
o Identify the appropriate subset of test cases that must be run to exercise the changed
portions of your program.
You can use the scripts as-is, modify them, or follow them as models for your own scripts.
• Support for your tools and practices. PureCoverage works behind the scenes with standard
tools and compilers. PureCoverage supports shared libraries and all standard industry
compilers.
4.3 Finding Untested Areas of Hello World
This chapter describes how to use PureCoverage to determine which parts of your program are
untested. It steps you through an analysis of a sample hello_world.c program, telling you how to:
• Compile and link your program under PureCoverage to instrument the program with
coverage monitoring instructions
• Run the program to collect coverage data
• Display and analyze the coverage data to determine which parts of your program were not
tested
• Improve coverage for the program
• Modify your makefiles to use PureCoverage throughout your development cycle
The chapter concludes with a behind-the-scenes look at how PureCoverage works, and a
discussion of how PureCoverage handles more complex programming situations.
4.4 Sample Program
Begin your analysis of the sample “hello_world.c” program by copying the program file into
your working directory. Then instrument the program with PureCoverage and run it.
• Create a new working directory. Go to the new directory, and copy the hello_world.c
program and related files from the <purecovhome>/example directory:
% mkdir /usr/home/pat/example
% cd /usr/home/pat/example
% cp <purecovhome>/example/hello* .
• Examine the code in hello_world.c. The version of hello_world.c provided with
PureCoverage is slightly more complicated than the usual textbook version:
#include <stdio.h>
void display_hello_world();
void display_message();
main(argc, argv)
int argc;
char** argv;
{
if (argc == 1)
display_hello_world();
else
display_message(argv[1]);
exit(0);
}
void
display_hello_world()
{
printf("Hello, World\n");
}
void
display_message(s)
char *s;
{
printf("%s, World\n", s);
}
• Compile, using the -g debugging option, and link the program. Then run the program:
% cc -g hello_world.c
% a.out
Verify that it produces the expected output:
Hello, World
o Note: If you compile your code without the -g option, PureCoverage provides
only function-level data. It does not show line-coverage data.
• Now add purecov at the beginning of the compile-and-link line:
% purecov cc -g hello_world.c
A message appears, indicating the version of PureCoverage that is instrumenting the
program:
“PureCoverage 4.4 Solaris 2, Copyright 1994-1999 Rational Software Corp.
All rights reserved.
Instrumenting: hello_world.o Linking”
4.5 Running an instrumented program
You now have a PureCoverage-instrumented executable. Run it:
% a.out
The typical output (the start-up banner, the program's normal output, and a recording message)
is described below.
4.6 Program output
In addition to the PureCoverage start-up banner and recording message, the program produces its
normal output, just as if it were not instrumented. For programs that do not ordinarily produce
console output, PureCoverage displays only the start-up banner and the recording message. You
can redirect all PureCoverage output to a file by using the -log-file option.
4.7 Coverage data
When the program a.out completes execution, PureCoverage writes coverage information for the
session to the file a.out.pcv. Each time the program runs, PureCoverage updates this file with
additional coverage data.
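For example, with the hello_world program from Section 4.4, running the instrumented
executable once without an argument and once with one exercises both branches of main, and
both runs accumulate into the same a.out.pcv file (PureCoverage banner omitted):
% a.out
Hello, World
% a.out Goodbye
Goodbye, World
% ls a.out.pcv
a.out.pcv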
4.8 Internal Functioning – Why Instrument
PureCoverage inserts usage-tracking instructions into the object code of your application. After
the compiler creates the object files for your application, PureCoverage instruments the object
files, using Object Code Insertion (OCI) to add the monitoring instructions. The instrumented
object files, or cache files, are given new names so that your original object files are not
modified. PureCoverage passes the cache files, complete with the instrumented versions of any
libraries required for the application, to the linker, in place of the original object files. The
cache file names always include pure and an encoded PureCoverage version number. The names
can also include information about the size of the original file, or the name and number of the
operating system.
5. Purecov Automation
5.1 Problem Statement
“Coverage analysis for any software product helps in finding testing gaps and assures high
quality of product. NCSIM has been using ‘PureCov’ as coverage analysis tools for past few
years. With the number of testcases increasing day by day, with more and more complexity
coming in product, turn around time for coverage number generation is increasing. In addition,
this process requires manual intervention e.g. merging of pcv files, instrumenting binaries, firing
vobs which are not running in Noida FARM manually. We’d like to automate entire process in
order to achieve fast turn around and avoid manual interventions.”
Languages used during automation of the PureCov processes:
1. C Shell
2. K Shell
3. Perl
5.2 Languages used and Required Description
5.2.1 C Shell
The C shell (csh or the improved version, tcsh, on most machines) is a Unix shell that was
created by Bill Joy while a graduate student at University of California, Berkeley in the late
1970s. It has been distributed widely, beginning with the 2BSD release of the BSD Unix system
that Joy began distributing in 1978. Other early contributors to the ideas or the code were Michael
Ubell, Eric Allman, Mike O'Brien and Jim Kulp.
The C shell is a command processor that's typically run in a text window, allowing the user to
type commands which cause actions. The C shell can also read commands from a file, called a
script. Like all Unix shells, it supports piping, here documents, command substitution, variables,
control structures for condition-testing and looping and filename wildcarding. What
differentiated the C shell, especially in the 1980s, were its interactive features and overall style.
Its new features made it easier and faster to use. And the overall style of the language looked
more like C and was seen as more readable.
Today, csh on most machines is actually tcsh, an improved version of csh. As a practical matter,
tcsh is csh: One file containing the tcsh executable has links to it as both "csh" and "tcsh" so that
either name refers to the same improved version of the C shell.
tcsh added filename and command completion and command line editing concepts borrowed
from the Tenex system, which is where the "t" came from. Because it only added functionality
and didn't change what was there, tcsh remained backward compatible with the original C
shell. And though it started as a side branch from the original source tree Joy had created, tcsh is
now the main branch for ongoing development. tcsh is very stable but new releases continue to
appear roughly once a year, consisting mostly of minor bug fixes.
Basic statements
A basic statement is one that simply runs a command. The first word is taken as the name of the
command to be run and may be either an internal command, e.g., "echo," or an external
command. The rest of the words are passed as arguments to the command.
At the basic statement level, here are some of the features of the grammar:
o Wildcarding
The C shell, like all Unix shells, wildcards any command-line arguments. If a word
contains wildcard characters, it's taken as a pattern and replaced by a list of all the
filenames that match.
* matches any number of characters.
? matches any single character.
[...] matches any of the characters inside the square brackets. Ranges are allowed, using
the hyphen.
[!...] matches any character not in the set.
The C shell also introduced several notational conveniences, since copied by other Unix
shells.
abc{def,ghi} is alternation and expands to abcdef and abcghi.
~ means the current user's home directory.
~user means user's home directory.
Multiple directory-level wildcards, e.g., "*/*.c", are supported.
Having the shell do wildcarding was an important decision on Unix. It meant wildcarding
would always work with every command, it would always work the same way and only
the shell would need code to do it. But the decision depended on the fact that Unix could
pass very long argument lists very efficiently through the exec system call that csh uses
to create child processes. By contrast, on Windows, wildcarding is conventionally done
by each application (through C run-time code before main() is invoked) because of its
MS-DOS heritage: MS-DOS only allowed a 128-byte command line to be passed to an
application, making wildcarding by the DOS command prompt impractical.
o I/O redirection
By default, when csh runs a command, the command inherits the csh's stdio file handles
for stdin, stdout and stderr, which normally all point to the console window where the C
shell is running. The i/o redirection operators allow the command to use a file instead for
input or output.
> file means stdout will be written to file, overwriting it if it exists, and creating it if it
doesn't. Errors still come to the shell window.
>& file means both stdout and stderr will be written to file, overwriting it if it exists, and
creating it if it doesn't.
>> file means stdout will be appended at the end of file.
>>& file means both stdout and stderr will be appended at the end of file.
< file means stdin will be read from file.
<< string is a here document. Stdin will read the following lines up to the one that
matches string.
o Joining
Commands can be joined on the same line.
; means run the first command and then the next.
&& means run the first command and, if it succeeds with a 0 return code, run the next.
|| means run the first command and, if it fails with a non-zero return code, run the next.
o Piping
Commands can be connected together using pipes, which cause the output of one
command to be fed into the input of the next. Both commands run concurrently.
| means connect stdout to the stdin of the next command. Errors still come to the shell
window.
|& means connect both stdout and stderr to the stdin of the next command.
o Variable substitution
If a word contains a Dollar sign, "$", the following characters are taken as the name of a
variable and the reference is replaced by the value of that variable. Various editing
operators, typed as suffixes to the reference, allow pathname editing (e.g., to extract just
the extension) and other operations.
o Quoting and escaping
Quoting mechanisms allow otherwise special characters, e.g., whitespace, parentheses,
Dollar signs, etc., to be taken as literal text.
\ means take the next character as an ordinary literal character.
"string" is a weak quote. It can enclose whitespace, but variable and command
substitutions are still performed.
'string' is a stronger quote. The entire enclosed string is taken as a literal.
o Command substitution
Command substitution allows the output of one command to be used as arguments to
another.
`command` means take the output of command, parse it into words and paste them back
into the command line.
o Background execution
Normally, when the C shell starts a command, it waits for the command to finish before
giving the user another prompt signaling that a new command can be typed.
command & means start command in the background and prompt immediately for a new
command.
o Subshells
A subshell is a separate child copy of the shell that inherits the current state but can then
make changes, e.g., to the current directory, without affecting the parent.
( commands ) means run commands in a subshell.
o Control structures
The C shell provides control structures for both condition-testing and iteration. The
condition-testing control structures are the if and switch statements. The iteration control
structures are the while and foreach statements.
• if statement
There are two forms of the if statement. The short form is typed on a single
line and can specify only a single command if the expression is true.
if ( expression ) command
The long form uses then, else and endif keywords to allow for blocks of
commands to be nested inside the condition.
if ( expression1 ) then
commands
else if ( expression2 ) then
commands
...
else
commands
endif
If the else and if keywords appear on the same line, csh chains, rather than
nests them; the block is terminated with a single endif.
• switch statement
The switch statement compares a string against a list of patterns, which may
contain wildcard characters. If nothing matches, the default action, if there is
one, is taken.
switch ( string )
case pattern1:
commands
breaksw
case pattern2:
commands
breaksw
...
default:
commands
breaksw
endsw
• while statement
The while statement evaluates an expression. If it's true, it runs the nested
commands and then repeats.
while ( expression )
commands
end
• foreach statement
The foreach statement takes a list of values, usually a list of filenames produced by
wildcarding, and then for each, sets the loop-variable to that value and runs the nested
commands.
foreach loop-variable ( list-of-values )
commands
end
o Variables
The C shell implements both shell and environment variables. Environment variables,
created using the setenv statement, are always simple strings, passed to any child
processes, which retrieve these variables via the envp[] argument to main().
Shell variables, created using the set or @ statements, are internal to the C shell. They are not
passed to child processes. Shell variables can be either simple strings or arrays of strings.
Some of the shell variables are used to control various internal C shell options, e.g., what
should happen if a wildcard fails to match anything.
In current versions of csh, strings can be of arbitrary length, well into millions of
characters.
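For example, a csh script fragment exercising all three kinds of variable:
set name = fred               # shell variable: a simple string
set files = (a.c b.c c.c)     # shell variable: an array of strings
echo $files[2]                # prints: b.c
@ count = 3 + 4               # @ evaluates an arithmetic expression
setenv PRINTER lw             # environment variable: passed to child processes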
Criticism of CSH
Though popular for interactive use because of its many innovative features, csh has never been
as popular for scripting. Initially, and through the 1980s, csh couldn't be guaranteed to be present
on all Unix systems. sh could, which made it a better choice for any scripts that might have to
run on other machines. By the mid-1990s, csh was widely available, but the use of csh for
scripting faced new criticism by the POSIX committee, which specified there should only be one
preferred shell for both interactive and scripting purposes and that that one preferred shell should
be the Korn shell. The C shell also faced criticism from others over the C shell's alleged defects
in its syntax, missing features and poor implementation.
Syntax defects were generally simple but unnecessary inconsistencies in the definition of the
language. For example, the set, setenv and alias commands all did basically the same thing,
namely, associate a name with a string or set of words. But all three had slight, but completely
unnecessary differences. An equal sign was required for a set but not for setenv or alias;
parentheses were required around a word list for a set but not for setenv and alias, etc. Similarly,
the if, switch and the looping constructs use needlessly different keywords (endif, endsw and
end) to terminate the nested blocks.
Missing features most commonly cited are the lack of ability to manipulate the stdio file handles
independently and support for functions.
The implementation, which used an ad hoc parser, has drawn the most serious criticism. By the
early 1970s, compiler technology was sufficiently mature that most new language
implementations used either a top-down or bottom-up parser capable of recognizing a fully
recursive grammar. It's not known why an ad hoc design was chosen instead for the C shell. It
may be simply that, as Joy put it in an interview in 2009, "When I started doing this stuff with
Unix, I wasn't a very good programmer." But that choice of an ad hoc design meant that the C
shell language was not fully recursive. There was a limit to how complex a command it could
handle.
It worked for most things users typed interactively, but on the more complex commands a user
might take time to write in a script it didn't work well and could easily fail, producing only a
cryptic error message or an unwelcome result. For example, the C shell could not support piping
between control structures. Attempting to pipe the output of a foreach command into grep simply
didn't work. (The work-around, which works for many of the complaints related to the parser, is
to break the code up into separate scripts. If the foreach is moved to a separate script, piping
works because scripts are run by forking a new copy of csh that does inherit the correct stdio
handles.)
Another example is the unwelcome behavior in the following fragments. Both of these appear to
mean, "If 'myfile' does not exist, create it by writing 'mytext' into it." But the second version
always creates an empty file because the C shell's order of evaluation is to look for and evaluate
I/O redirection operators on each command line as it reads it, before examining the rest of the
line to see if it contains a control structure.
This one works:
if ( ! -e myfile ) then
echo mytext > myfile
endif
This always gives an empty file:
if ( ! -e myfile ) echo mytext > myfile
Basic Filters In CSH(UNIX)
1. grep : grep is a command line text search utility originally written for Unix. The name is taken
from the first letters in global / regular expression / print, a series of instructions in text editors
such as ed. The grep command searches files or standard input globally for lines matching a
given regular expression, and prints them to the program's standard output.
Commonly used options:
-A   shows lines after the line matching PATTERN
-B   shows lines before the line matching PATTERN
-q   quiet: produces no output on either success or failure (the exit status tells you which)
-n   provides the line number of each matching line
2. cut : Cut out selected fields of each line of a file
-c
The list following -c specifies character positions (for instance, -c1-72 would pass the first
72 characters of each line).
-f
The list following -f is a list of fields assumed to be separated in the file by a delimiter
character (see -d ); for instance, -f1,7 copies the first and seventh field only. Lines with no
field delimiters will be passed through intact (useful for table subheadings), unless -s is
specified. If -f is used, the input line should contain 1023 characters or less.
-d
The character following -d is the field delimiter (-f option only). Default is tab. Space or
other characters with special meaning to the shell must be quoted. delim can be a multibyte character.
3. tail and head : tail is a program on Unix and Unix-like systems used to display the last few
lines of a text file or piped data. The command-syntax is:
tail [options] <file_name>
head is a program on Unix and Unix-like systems used to display the first few lines of a text file
or piped data. The command syntax is:
head [options] <file_name>
4. awk : AWK is a complete programming language that is designed for processing text-based
data, either in files or data streams, and was created at Bell Labs in the 1970s. The name AWK is
derived from the family names of its authors — Alfred Aho, Peter Weinberger, and Brian
Kernighan. We have used it to transfer “a cut out portion” of file by either “cut or grep” to be
transferred to a variable so as to used for further manipulation.
5.2.2 KSH
The Korn shell (ksh) has some basic differences in syntax compared to csh. In my work at
Cadence it proved more efficient whenever data manipulation was concerned, though in the end
I favoured Perl for most data-manipulation tasks.
5.2.3 PERL
Perl is a high-level, general-purpose, interpreted, dynamic programming language. Perl was
originally developed by Larry Wall in 1987 as a general-purpose Unix scripting language to
make report processing easier. Since then, it has undergone many changes and revisions and
become widely popular amongst programmers. Larry Wall continues to oversee development of
the core language, and its upcoming version, Perl 6.
Perl borrows features from other programming languages including C, shell scripting (sh),
AWK, and sed. The language provides powerful text processing facilities without the arbitrary
data-length limits of many contemporary Unix tools, facilitating easy manipulation of text files.
It is also used for graphics programming, system administration, network programming,
applications that require database access and CGI programming on the Web. Perl is nicknamed
"the Swiss Army chainsaw of programming languages" due to its flexibility and adaptability.
5.3 Introduction to Rational Clearcase
All artifacts produced during development lifecycle will be configurable items. The code and
external deliverables will be stored and managed with the help of tool called “ClearCase”.
5.3.1 Purpose of Clearcase
• Many people working on the same code
• Projects delivered in several releases (builds)
• Rapid evolution of software and hardware
• Project surveillance: the project's status, bugs, features, etc.
• Concurrent support and development
5.3.2 Clearcase Terminologies
Version Control: It provides
• Versions of all file types
• Versions of directories
• Storage of objects in reliable, scalable Versioned Object Bases (VOBs)
• Elements that are read-only until checked out
• Unlimited branching and merging
5.3.3 Versioned Object Bases (VOBs)
A repository that holds files, directories and objects that collectively represent a product or a
development effort. There are two types of VOBs:
• Public VOBs: accessible to everyone in the development community.
• Private VOBs: accessible only to the people who create them.
5.3.4 Views
A view is a working area. It provides access to one version of each element in the project. In
Clearcase the elements of view are in a tree format.
5.3.5 Checkout
Whenever you check out a file you create a private copy of that file. You can make changes in
the file; the changes will not be visible to others until you check in the file.
5.3.6 Checkin
Whenever you check in a file, others can view the changes you made.
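On the command line, this cycle maps onto the cleartool checkout and checkin commands; a
typical session looks like this (file name illustrative):
% cleartool checkout -nc myscript.csh
% vi myscript.csh
% cleartool checkin -c "fix merge step" myscript.csh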
5.4 Script package for Purecov Automation
My automation for PureCov consists of five scripts, which in turn employ several other scripts.
The scripts were placed in my home directory:
1. CopyMergePurecov
2. RunRegression
3. MergePcv
4. Report_mail
5. unusedfunc
All of these I’ll be explaining one by one.
5.4.1 CopyMergePurecov
• Use of the script:
This script copies the .pcv files from the location they are left in after the "Nightly Farm
Regression". To access that location we need to define a view and set its configuration
specification for the release for which we need to generate PureCov numbers. When run, the
script first brings purecov onto our path and installs it. It then creates a directory named
"Purecov_pcv" (if it does not already exist). This directory will contain a .pcv file named
'output.pcv', which is the merge of all the .pcv files present in the feature's cache directory
after the "Nightly Farm Regression".
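The heart of the script reduces to a few csh lines. The following is a minimal sketch, assuming
PUREHOME points at the PureCov installation and CACHE_DIR at the feature's cache
directory (both names hypothetical), and that .pcv files are combined with purecov's merge
facility:
#!/bin/csh -f
# bring purecov onto the path (installation location is site-specific)
set path = ($PUREHOME/bin $path)
# create the collection directory if it does not already exist
if ( ! -d Purecov_pcv ) mkdir Purecov_pcv
# merge every .pcv file from the regression cache into output.pcv
purecov -merge=Purecov_pcv/output.pcv $CACHE_DIR/*.pcv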
5.4.2 RunRegression
• Use of the script:
This script is used to run a regression of a testcase bank on a remote login machine. The same
steps as for CopyMergePurecov need to be followed when executing it:
1. Remote login to a Linux machine, for example ldvlinux121.
2. Make a view, build it and set it.
3. Set the configuration specification according to the release that is scheduled.
We have already seen that we need to build/instrument our code in order for it to give the line
coverage and the function coverage of our source code. For that purpose we have to build all
the binaries for which we need the final report, so this script has been provided with a switch.
Let's first study the switches this script comes equipped with.
o Switches:
-bank: specify the testcase bank for which you want to run the regression.
-build: if this option is given, the script instruments and builds all the binaries; otherwise it
picks up the build and the installation from a default location.
After the build step has taken place, run_pc runs the testcases, and a .pcv file named after our
release and the bank is generated, for example "pvincisive_ius92_update_in.pcv".
5.4.3 MergePcv
• Use of the script:
This script first checks whether a directory named "Purecov_pcv" exists. If it does not, the
script tells the user that the regression has not yet happened, so there is no point in running the
script, and promptly exits. Otherwise, after loading purecov onto our path, it accesses
Purecov_pcv and merges all the *.pcv files present there. The final .pcv file generated is named
final.pcv. This file is fed to generate_report.csh, and we get 7 reports for the respective tools.
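A minimal sketch of that control flow (generate_report.csh is the report generator mentioned
above; its argument convention is assumed):
#!/bin/csh -f
# refuse to run if the regression results are not present yet
if ( ! -d Purecov_pcv ) then
    echo "Purecov_pcv does not exist: run the regression first"
    exit 1
endif
# merge every per-bank .pcv into final.pcv and produce the reports
purecov -merge=Purecov_pcv/final.pcv Purecov_pcv/*.pcv
generate_report.csh Purecov_pcv/final.pcv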
5.4.4 Report_mail:
• Use of the script:
This is a data-manipulation script with which we generate a report consisting of a summary of
the reports generated through run_pc. After the execution of this script we get 7 .rpt files in
our working directory.
These are then compared with the .rpt.au files which have been copied from the repository
area. Depending upon the difference in function usage compared to the "gold file", we slot the
functions into a separate file called 'unusedfunclist', and the report goes into 'mail.txt'.
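The comparison step can be pictured as a per-tool diff against the gold copy; a simplified
sketch (report names illustrative):
foreach rpt (*.rpt)
    # compare today's report against the checked-in gold file
    diff $rpt ${rpt}.au >> mail.txt
end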
• View of 'mail.txt':
The script has an added option of generating 'UnusedFunc.txt' by using the -unused option.
• unusedfunc
• View of 'UnusedFunc.txt' (snipped)
6. Script for extraction from /hojo and further performance comparison for ius92
'update_in' and 'bugfix_in'
• Use of the script:
This script generates a mail containing the performance report of bench files compared to
previous runs. With the help of an internal script, my script takes data from the web address
given in the script and performs certain programming manoeuvres, producing a table of the
bench files whose performance today has shown DEGRADATION.
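The fetch-and-filter step can be sketched as a one-line pipeline (the URL and the report layout
are assumptions):
% wget -q -O - http://hojo/perf/today.html | grep DEGRADATION > degraded.txt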
• Output (snipped)
7. CCMS (Cadence Change Management System)
7.1 Introduction:
• CCMS = Cadence Change Management System
– Records in CCMS are called CCRs (CCMS Change Requests)
• Cadence customization of ClearQuest from IBM/Rational
• Project scope not limited to the core CCMS application:
– Integration with PCR (Cadence's previous home-grown product change request system)
– Integration with the Siebel customer request management (CRM) system, used by
Customer Support to track customer service requests (SRs)
– Integrations with business group systems (CM, CCB, web sites)
• Primarily by creating a strong Unix command line interface (CLI)
7.2 CCMS Logical State Model Workflow
A CCR can be filed by the customers as well as the product validation team (P.V. team) i.e. the
internal customers. Each CCR has its own life cycle. The main stages of the CCR are:
1. CCR submitted.
The CCR is filed in this stage by the customer.
2. Planning.
In this planning is done. Planning includes things like expected time in which the
customer requirements will be met, what all features will be included and which release
of the product will contain the desired features.
3. Implementation.
The CCR is fixed in this stage.
4. Product validation.
The product is validated. Validation includes running regressions and individual test
cases.
5. Release.
The product is released with the desired features. In many cases the customer may ask for the new features within a very short period of time; in that case hotfixes are provided to the customer before the actual release of the product. Hotfixes are patches which the customer can install to update the product and obtain the desired features.
7.3 THE FRUIT SYSTEM: Second Method of Filing a CCR
Filing is done using the rdunresolved command, on the Unix environment itself. For example, if there is a testcase for which one needs to file a CCR, go to that directory and simply run the command rdunresolved. The main point is to specify the details properly, using the options available, which can be listed with the '--help' option.
rdunresolved --help
Options are:
-help        Print option summary and exit
-plat        Platform
-feature     Feature
-test        Test directory
-file        File name for list of tests
-ccr         Existing CCR number
-family      Family name to use in the CCR
-prod        Product name to use in the CCR
-prodlevel2  Productlevel2 to use in the CCR
-version     Version to use in the CCR
For example:
"rdunresolved -feature feature_name" is the minimum command needed to file the right CCR.
Options that can be used apart from this are:
• "rdunresolved -file file_name -feature feature_name", which is used if there is more than a single testcase to be filed in the CCR. Before this, one needs to make a file containing the paths of all the testcases.
• "rdunresolved -file file_name -feature feature_name -ccr CCMPR788547", which is used when one wants to append testcases to an already existing CCR.
After filing the CCR through this system, a file named UNRESOLVED.feature_name is created, e.g. UNRESOLVED.extcl or UNRESOLVED.gui. One disadvantage is that after filing the CCR one still needs to open it through CCMS and append the description of the CCR. It is better than the CCMS system in that the UNRESOLVED file is created automatically and is also deleted once the CCR is validated. A batch-filing sketch follows.
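The sketch below shows how several failing testcases could be filed in one CCR with these commands. The testcase paths are hypothetical; the feature name extcl and the CCR number come from the examples above.

    #!/bin/ksh
    # Build the list file that rdunresolved -file expects: one testcase
    # path per line (paths hypothetical).
    list=failing_tests.txt
    cat > $list <<EOF
    /regr/extcl/tc101
    /regr/extcl/tc102
    EOF

    # File a single CCR covering every test in the list.
    rdunresolved -file $list -feature extcl

    # Or append the same tests to an existing CCR instead:
    # rdunresolved -file $list -feature extcl -ccr CCMPR788547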
8. Custom Script For SQL Query and Further Manipulation
8.1 Append_Finder
8.2 Append_Data_Manipulation
• About the Script:
Cadence has an efficient piece of software, CCMS, for filing customer requests for change and for reporting redundancy in the source code. When such errors are detected by the Product Validation team, they are filed in the form of a CCR and stored in the database of the feature which needs attention/modification. The database has many index fields which we can use to extract data with a simple SQL script, but unfortunately it lacked one particular field, "Append Time/Append Date". This gap was rectified by my script. My script pulls all the CCRs that have been filed under the manager Amit Aggrawal; out of these we have to find the CCRs which have been modified in the last 15 days (a sketch of this filter is given below). There are, however, some mandatory steps required to give yourself access to the CCMS database through the Unix command line interface.
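The 15-day filter could look like the following sketch. It assumes a dump file of lines "CCR-number append-date" produced by the earlier scripts, and a date(1) that accepts -d and +%s (GNU date); the real dump format is not reproduced here.

    #!/bin/ksh
    # Keep only the CCRs appended within the last 15 days.
    now=$(date +%s)
    cutoff=$(( now - 15 * 24 * 60 * 60 ))

    while read ccr appended; do
        ts=$(date -d "$appended" +%s)           # GNU date assumed
        if (( ts >= cutoff )); then
            print "$ccr $appended"              # modified in the last 15 days
        fi
    done < ccr_dump.txt > recent_ccrs.txt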
8.2.1 Options for CCMS script access
To use the Unix command line interface to the CCMS system, you must have access to a
pre-established directory containing the CCMS scripts. The preferred method is to access
the files through the /grid server system. The /grid servers are commonly established at
major Cadence locations and are readily accessible through the Cadence network (locally
and via remote login). They have been established as repositories for software and tools of
common interest to our engineering community.
If you do not have access to a /grid server, you may also access the scripts directly from the
server 'cadence', or you may need to establish your own local server environment. The
following instructions describe the alternative methods available.
To run the scripts, you will either:
1. Run from a /grid server
2. Run the scripts directly from server cadence - /net/cadence/usr/local/cpvt
3. Link to your local group server, if one has been established for your team
4. Set up a local server for your own use
The CCMS script files will be updated as additional scripts are made available and as other updates are required. There is therefore a continuing need to refresh the CCMS script directory contents; please take this requirement into consideration if you choose option 3 or 4 above.
Installation/access can be accomplished in one of the following ways:
1. Filing an IT helpdesk ticket and requesting installation setup assistance from the
CCMS Support Team (file the ticket against ticket category "CCMS/Unix Command
Line")
2. Accessing a /grid server on the Cadence network
3. Run the scripts directly from server cadence - /net/cadence/usr/local/cpvt
4. Accessing a previously configured local server that already contains the CCMS
script directory hierarchy
5. Using the rcp command to copy the CCMS script directory hierarchy from the server 'cadence' to your local system and create your own local server (a sample command is sketched after this list)
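For option 5, a command of roughly the following shape could be used; the destination directory, and the exact source path on the server, are assumptions:
% rcp -r cadence:/usr/local/cpvt /usr/local/cpvt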
Updating the files:
1. The /grid data is re-synced with the CCMS host system on a daily basis
2. Files on server 'cadence' are re-synced with the CCMS host system on a daily basis
8.2.2 Accessing the CCMS scripts from /grid
As noted in section 8.2.1, the /grid servers are established at major Cadence locations as repositories for software and tools of common interest, and are readily accessible through the Cadence network (locally and via remote login). The /grid system is created and supported by the Server Farm Initiative (SFI) Common Tools group, part of the IT organization. More information on the Server Farm Initiative is available through the SFI Common Tools web site.
For latest information on /grid site locations, see http://construct/cgi-bin/SFIdb?q=site.
These instructions assume basic Unix command line knowledge. Lines starting with "%" denote
actual UNIX command syntax to be executed.
To test accessibility to a grid server, type:
% ls /grid/common/pkgs/ccrtools
If this command returns the message: "/grid/common/pkgs/ccrtools: No such file or directory",
please file a helpdesk ticket against "CCMS/Unix Command Line" and the CCMS support team
will help to determine the best solution for you.
Assuming you have access to /grid (the above command will return a directory list of assorted
filenames stored in the /grid/common/pkgs/ccrtools directory), you will need to update your
.cshrc file to point to the CCMS script files on /grid.
(Please make appropriate modifications to these instructions, if you are using ksh or sh.)
1. In your .cshrc file, set the environment variable CCRDIR to point to the ccrtools
directory on /grid:
% setenv CCRDIR /grid/common/pkgs/ccrtools
2. Add ${CCRDIR} to the search path in your .cshrc file:
% set path = ( . ${CCRDIR} ${path} )
NOTE: If "/grid/common/bin" is included in your current path environment
assignment, make sure it occurs in the list AFTER "/bin:/usr/bin", or you may have
problems with the CCMS scripts. (/grid/common/bin contains utilities like GNU
"sed" and "awk", which can behave differently than versions of these tools from
the standard OS vendors.)
3. Update your path and environment by sourcing your .cshrc file:
% source .cshrc
You should now have access to the CCMS system using the new CCMS Unix command line
scripts. If you have any problems or questions, please file an IT helpdesk ticket against
"CCMS/Unix Command Line", for further assistance.
8.2.3 Ccrprint
This command prints the details of the specified CCR to the standard output, i.e. the monitor screen. If the UNIX redirection operators '>' or '>>' are used after the command, the output is redirected to the specified path.
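For example, reusing the CCR number quoted in section 7.3 (the output filename is arbitrary):
% ccrprint CCMPR788547
% ccrprint CCMPR788547 > ccr_details.txt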
9. Script for Mailing Data of the Required Query to the Required Personnel
9.1 Snapshot of the Script (NCQuery)
9.2 Config (Snapshot)
9.3 Script to Specify the Name of the Engineer/Manager for Particular CCRs (dump_name_eng_man)
• Use of the Script:
This script reads the file dumped by the previous two scripts and extracts the names of the manager and the engineer for each CCR. It checks that no name is repeated, and when the engineer and the manager coincide it leaves out the duplicate. The script is thus able to deliver mails to the concerned persons in an effective, non-repetitive manner, as sketched below.
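A sketch of the deduplication and mailing, assuming the dump file holds lines of the form "CCR engineer manager" with single-token usernames, and that mail addresses are simply name-based (all of these are assumptions):

    #!/bin/ksh
    # Collect every engineer and manager name, dropping the manager when
    # he coincides with the engineer, then deduplicate with sort -u.
    while read ccr eng man; do
        print "$eng"
        [[ "$man" != "$eng" ]] && print "$man"
    done < ccr_dump.txt | sort -u |
    while read person; do
        # Mail the summary report to each distinct person (address
        # scheme hypothetical).
        mailx -s "CCRs pending for you" "${person}@cadence.com" < mail.txt
    done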
10. Conclusion
The "Object Code Insertion" technology of Purecov was effectively utilized to pinpoint workable areas in our source code. My script package helped automate a complex and error-prone process, and also filed and documented the resulting output. Various SQL queries were fired, and the incoming data was managed effectively using Perl and ksh scripting. The bugs encountered after firing testcases were filed into the CCMS system as CCRs. Effective product validation of NCSim was carried out, and the bugs which were found have been fixed for the ius92 release.
11. References
Books
1. Sumitabha Das, Your UNIX: The Ultimate Guide.
2. David E. Bellagio and Tom J. Milligan, Software Configuration Management Strategies and IBM Rational ClearCase: A Practical Introduction, 2nd Edition.
Internet Sites
1. www.wikipedia.com
2. www.cadence.com
3. www.unix.com
4. http://cdsiweb (internal site of Cadence)
5. www.perl.org
12. Impediments/Difficulties Faced During the Project Semester and Suggestions Related to the Work/Project Semester
There were some difficulties which I faced, most of them in the starting phase of the project while trying to understand its objective. Having no prior knowledge of VHDL/Verilog resulted in a loss of momentum in the initial phase. It was the guidance provided by my supervisor (Mr. Amit Aggrawal) and the leadership provided by the project manager that helped me understand the fundamentals behind the project and surpass all the difficulties to bring the project to a fruitful end.
The project semester is a very useful part of the curriculum, as it enables students to get an insight into the industry. Besides this, the students also learn to work in a team and to face practical problems.