IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 31, NO. 6, JUNE 2012
SLIDER: Simulation of Layout-Injected Defects
for Electrical Responses
Wing Chiu Tam, Student Member, IEEE, and R. D. (Shawn) Blanton, Fellow, IEEE
Abstract—Logic-level simulation has been the de facto method
for simulating defect/faulty behavior for various testing tasks
since it offers a good tradeoff between accuracy and speed.
Unfortunately, by abstracting defect behavior to the logic level
(i.e., a fault model), it also discards important information
that inevitably results in inaccuracies. This paper describes a
fast and accurate defect simulation framework called SLIDER
(simulation of layout-injected defects for electrical responses).
SLIDER uses well-developed mixed-signal simulation technology
that is conventionally used for design verification. There are
three innovative aspects that distinguish SLIDER from prior
work in this area: 1) accuracy resulting from defect injection
taking place at the layout level; 2) speedup resulting from careful
and automatic partitioning of the circuit into maximal digital
and minimal analog domains for mixed-signal simulation; and
3) complete automation that includes defect generation, defect
injection, design partitioning, netlist extraction, mixed-signal
simulation, and test-data extraction. The virtual failure data
created by SLIDER is useful in a variety of settings that include
diagnosis resolution improvement, defect localization, fault model
evaluation, and evaluation of yield/test learning techniques that
are based on failure data analysis.
Index Terms—Defect modeling, layout analysis, mixed-signal
simulation, volume diagnosis, yield learning.
Manuscript received July 22, 2011; revised October 27, 2011; accepted
December 11, 2011. Date of current version May 18, 2012. This work was
supported by the National Science Foundation, under Award CCF-0541297.
This work was an extension of [1]. The major extensions include automatic
defect generation and several additional experiments performed to illustrate
applications of SLIDER. This paper was recommended by Associate Editor
C.-W. Wu.
The authors are with the Center for Silicon System Implementation,
Department of Electrical and Computer Engineering, Carnegie Mellon
University, Pittsburgh, PA 15213 USA (e-mail: wtam@andrew.cmu.edu;
blanton@ece.cmu.edu).
Digital Object Identifier 10.1109/TCAD.2012.2184108
1 A systematic defect can be due to the fabrication process only or due to
interactions between the design and the process. “Systematic” in this paper
refers to the latter.
I. Introduction
AGGRESSIVE CMOS scaling has brought about many
benefits that include faster transistors, reduced chip size,
lower power consumption, increased functionality integration,
and others. However, it also creates formidable problems from
the manufacturing perspective. Notable examples include subwavelength lithography-induced defects [2], chemical mechanical polishing (CMP)-induced defects [3], and others. Unlike
the random defects that are caused by particulate contamination, these defects are caused by design–process interactions
and are systematic1 in nature, i.e., they can occur repeatedly
in areas with similar layout features. Systematic defects have
substantially increased over the years and have become a
substantial component of yield loss [4]. In addition, scaling
continues to increase process complexity, which inevitably
introduces new failure mechanisms [3]. Traditional test structures, while still very useful to monitor the process, have
diminished in applicability since they are limited in volume
and therefore cannot adequately capture the complex design–
process interactions that are found in the product integrated
circuits (ICs) alone [5], [6]. Diagnosis of failed product ICs has
become an increasingly important channel to gather feedback
about the fabrication process [6]–[19]. This is evident in
several recently published papers that demonstrate innovative
use of volume diagnosis. Ongoing diagnoses are used as yield-learning techniques to identify systematic defects [6]–[10],
[18], [19] and derive important information that includes, e.g.,
design-feature failure rates [11], [12]. In [13] and [14], the
effectiveness of a given set of DFM rules is evaluated using
volume diagnosis results. Diagnosis is also used as a part
of a yield-monitoring vehicle so that defect density and size
distributions (DDSDs) can be estimated [15]. In [16], [17], and
[20], diagnosis is being used to monitor and control IC quality.
With increasing reliance upon volume diagnosis, the accuracy of diagnosis and the follow-on techniques become increasingly important as well, and therefore should be examined
carefully. Measuring accuracy is conceptually simple, but can
be very difficult to achieve in practice since it requires the
availability of the “correct answer” (i.e., the defect) for a
sufficiently large amount of failure data in order to draw a
statistically sound conclusion. Physical failure analysis (PFA)
is often performed on suspect defect locations isolated by
diagnosis in order to expose and examine the defect to identify
the culprit process step. Therefore, PFA data can provide the
“correct answer.” However, due to its time-consuming nature
[9], PFA is often selectively performed, and thus the data
collected can be insufficient and biased. To make the matter
worse, PFA data are often foundry-confidential and may be
unavailable even to the foundry customers. A better alternative
is to use simulation to generate large amounts of virtual failure
data. This paper is a step in this direction.
A. Motivation
To create virtual failure data, the ability to simulate a defect
(i.e., evaluate the output response of a circuit for the given
input stimulus in the presence of a given defect) is essential.
Moreover, since diagnosis itself is not perfect, a typical
diagnosis outcome consists of one or more candidates. It is
therefore important to have ranking criteria to distinguish these
candidates to reduce ambiguities in the subsequent analyses.
One of the most commonly used criteria is how closely
the simulated response of the candidate matches the actual
failure response. For this ranking criterion to be effective,
the defect must be accurately modeled and simulated. Since
logic-level simulation offers a good tradeoff between speed
and accuracy, most testing or diagnosis tasks operate at the
logic level. The erroneous behavior of a defect must also be
abstracted to the logic level, and is typically known as
a fault model. Unfortunately, logic-level abstraction inevitably
discards important information of the defect (e.g., voltage,
resistance, and others), and can therefore result in inaccuracies.
For example, it is well known that an open defect can exhibit
location-dependent behavior in the same wire segment [21],
[22]. Another example would be a Byzantine bridge [23]
that confounds logic-level diagnosis. These defect behaviors
actually provide important clues about the location of the
defects, but unfortunately cannot be naturally captured or
simulated by a conventional logic-level simulator.
To mitigate this problem, the concept of mixed-signal
simulation is introduced in [1]. This concept is not new
and has been used extensively in the design community for
years to achieve a reasonable speed-accuracy tradeoff for both
circuit design and design methodology validation [24]. In IC
design, it is often the case that certain parts of the circuit
(such as an analog block) require extensive simulation and
detailed analysis, while other parts of the circuit (such as
a digital block) can be simulated at a much higher level of
abstraction without significant loss of accuracy, with respect
to the parameters of interest. Mixed-signal design tools can
simulate at varying levels of abstraction for different blocks
of the design simultaneously within a single framework using
appropriate signal conversion at the block interfaces.
The same concept can be applied to defect simulation. To
accurately capture defect behavior, only the defect site and a
few of its preceding and subsequent logic stages need to be
simulated accurately. This is possible since any intermediate
voltage level that results due to the defect often becomes
logically well defined (either logic 1 or logic 0) in subsequent
logic stages, and therefore does not require circuit-level simulation accuracy from that point onward. Thus, simulation of the
remaining portions of the circuit can be carried out completely
in the digital domain. Fig. 1 illustrates this concept. Due to the
presence of a bridge defect (represented as a resistor inside the
shaded circle), the pull-up network of the OR gate and the pull-down network of the NAND gate are in contention for the driver
input values applied, resulting in the unknown logic value X
for both wire networks affected by the bridge. However, all
signals have well-defined, valid logic values after two stages
of logic. In other words, only the region inside the dotted rectangle requires circuit-level (i.e., detailed analog) simulation.
Fig. 1. Circuit region that contains indeterminate logic value (X) due to
the presence of a bridge defect. In this example, all intermediate values are
assumed to become well-defined logic values after two stages of logic.
Not only can the concept of mixed-signal simulation be
applied to defect simulation but the existing mixed-signal
simulation tools can also be directly used. In fact, it is more
efficient and robust to use the existing tools, rather than to
develop a new tool from scratch. This is because mixed-
signal simulation is a mature technology with associated
modeling languages (e.g., Verilog-AMS [18]) and commercial
simulators (e.g., Cadence AMS Designer [19]). Using these
tools, however, requires careful partitioning of the design into
digital and analog blocks. Specifically, all regions that require
good simulation accuracy are consolidated into a single analog
block. Changes can then be made to the analog block to
represent the presence of one or more defects. The rest of the
design remains as a digital block. Once the design is properly
partitioned, the mixed-signal simulator can be directly used.
B. Prior Work
The need for more accurate defect modeling has motivated
several prior works in this area. CODEF [25]–[27] is a process
simulator that was applied to gates/cells or small pieces of
the layout in order to predict the impact of a particulate
contamination on the layout geometry. The impact can then be
mapped to a circuit-level representation that can be simulated
for improving diagnosis [25] and understanding defect behavior [26], [27]. The inductive fault analysis tool CARAFE [28]
identifies likely faults in a circuit and creates corresponding
circuit-level netlists, taking into account the layout, the defect
characteristics, and the fabrication technology. In [29], a 4-bit
arithmetic logic unit (ALU) was used as a base design for
investigating a diverse set of realistic defects (modeled as
changes in the circuit-level netlist) that are crafted specifically
to benchmark diagnosis techniques. The work in [30] simulated transistor-level defects using SPICE and abstracted the
resulting behavior to logic level by using a new algebra. The
work in [31] and [32] used mixed-level fault simulation to
simulate a defect more accurately while maintaining a tolerable simulation speed. Unfortunately, the work in [25]–[27]
and [29] depended on simulating the entire circuit at the
transistor level and therefore was not scalable. The work
in [30]–[32], while scalable, did not consider layout and
therefore sacrificed accuracy. In addition, the variety of defects
considered in [30]–[32] was quite limited.
C. Our Contribution
By making use of a commercial mixed-signal simulator and
performing defect injection at the layout level, the framework
described here solves the scalability problem with no loss of
accuracy and thus enables new applications as the result of
the efficiency improvement. This framework, which we call
SLIDER (simulation of layout-injected defects for electrical
responses), also supports multiple defect injection. This is im-
portant since defect multiplicity increases [33] with escalating
process complexity.
While it is true that many defect behaviors can be mapped
to the logic level [34] to gain scalability, the mapping process
often requires precharacterization of the defect behaviors [30].
In addition, logic-level abstraction will not be able to capture
or simulate defect behavior that is both a function of the tests
applied and its physical location of occurrence. In contrast,
SLIDER (automatically) injects defects at the layout level and
extracts a netlist of the affected area.
Despite the similarity of SLIDER to the framework in [29],
SLIDER, besides having enhanced scalability, further distinguishes itself from the work in [29] in two other important
aspects. First, the framework in [29] models defects as circuitlevel netlist changes. Compared to logic-level simulation, this
has tremendously improved accuracy but it still does not adequately reflect the physical changes in the layout caused by the
defect. For example, if an open occurs in a long interconnect
connecting node A to node B, the physical location of the open
along the interconnect cannot be captured by simply adding
a large resistance between the two nodes. SLIDER performs
defect injection at the layout level, followed by automatic circuit netlist extraction at the defect injection site, and therefore
is more accurate. Second, the work in [29] lacks flexibility in
that there is no defect generation capability. Moreover, each
available defect is tailored to the specific 4-bit ALU circuit
considered. This means the available defects cannot be ported
to another design without significant effort. In contrast, both
defect generation and injection are completely automated in
SLIDER and thus can be readily applied to different designs.
D. Framework Usage
The SLIDER framework enables fast and accurate creation
of virtual failure data that are useful in a variety of settings.
One obvious use of virtual failure data is to benchmark different diagnosis techniques. This is possible since the type and
location of the defect are known and can be precisely controlled.
Checking the diagnosis outcome against the “correct answer”
allows us to quantify the accuracy of the diagnosis techniques
under evaluation. In fact, SLIDER can be used to evaluate any
test or yield-learning techniques that are based on failure data
analysis. For example, consider the evaluation of a systematic
defect identification methodology [6]–[10], [18], [19]. Using
SLIDER, random and systematic defects can be generated
in different proportions to evaluate how much “noise” the
methodology can tolerate and still provide the right answer.
Besides benchmarking, virtual failure data can actually aid
debugging of the test or yield-learning techniques since it can
provide examples where the technique fails. Such examples
expose incorrect assumptions and software bugs in the techniques. In fact, in the last experiment in Section IV, SLIDER
has exposed software bugs in the systematic defect identification methodology described in [7], and enabled further
refinement of the technique. This type of result can be very
difficult to achieve with real failure data that is limited in
quantity.
The virtual failure data can also be used to evaluate the
effectiveness of a fault model or test metric. Models and
metrics are used to direct automatic test pattern generation.
The resulting tests directly affect product quality (e.g., tests
that better catch defects lead to lower defect escape). Tests
generated using a given fault model can be simulated with
SLIDER to evaluate its effectiveness in defect detection. Tests
generated using a given test metric (e.g., physically aware N-detect [35]) can be subject to the same analysis to evaluate
the effectiveness of the test metric.
A less obvious usage of virtual failure data is to improve diagnosis itself by further analyzing diagnosis candidates to infer
more information. For example, as mentioned above, diagnosis
typically reports several equally ranked candidates. SLIDER
can be used to generate virtual failure data that corresponds to
different defect-injection scenarios at the candidate locations
to resolve this ambiguity. Even when there is no ambiguity in
the diagnosis (i.e., only one candidate is reported with a “100%
match”), it is possible to use SLIDER to better characterize
the defect (e.g., identify the range of resistance of an open
or a bridge, and the actual physical location), yielding
additional information for any follow-on analyses.
Last, virtual failure data can play another active role in
improving test or yield-learning techniques. Machine learning
[36] techniques can be applied to extract valuable information
from the virtual data. The lessons learned from the virtual
failure data can then be applied to actual failure data. It is
advantageous to use virtual failure data as the “training data,”
because its characteristics are known and can be precisely
controlled. For example, a classifier can be trained to separate
defects based on their types. There is absolute certainty in the
training-data labels (i.e., defect type) if virtual failure data are
used. Indeed, this concept was explored in a recent work in
[37]. In [37], a decision forest [38] was trained to separate
bridge and nonbridge defects using a mixture of virtual and
real failure data. The trained classifier was then applied to real
failure data whose defect types were known. The result of the
experiment was encouraging. Using the decision forest, the
work in [37] correctly classified all six failed ICs that were
deemed to be bridges by logic-level diagnosis software but
turned out to be nonbridges after PFA was performed.
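As a sketch of how such a classifier could be set up (this is not the implementation of [37]; the feature set and the extract_features helper below are hypothetical placeholders), a random forest from scikit-learn can be trained on virtual failures, whose labels are known by construction, and then applied to real failure data:

```python
# Sketch: train on virtual failure data (exact labels) and classify real data.
# Feature extraction is a hypothetical placeholder, not SLIDER's or [37]'s.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(failure):
    # e.g., number of failing patterns, number of failing outputs, etc.
    return [failure["num_failing_patterns"],
            failure["num_failing_outputs"],
            failure["num_candidate_nets"]]

def train_on_virtual(virtual_failures):
    X = np.array([extract_features(f) for f in virtual_failures])
    y = np.array([f["defect_type"] for f in virtual_failures])  # known labels
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def classify_real(clf, real_failures):
    X = np.array([extract_features(f) for f in real_failures])
    return clf.predict(X)  # e.g., "bridge" vs. "nonbridge"
```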
The remainder of this paper is organized as follows.
Section II describes the details of the SLIDER framework.
Section III describes the various defect-generation modes
supported in SLIDER. Section IV follows with several experiments to demonstrate its accuracy, its scalability, and how
it can be used for diagnosis improvement and evaluation of
diagnosis and yield-learning methodologies. Finally, this paper
is summarized in Section V.
II. Simulation Framework
In this section, we describe the structure and flow of the
SLIDER framework.
A. Framework Structure
Fig. 2 shows the key components of SLIDER. It is composed of a defect generator (DG), an analog block generator
(ABG), a digital block generator (DBG), a mixed-signal simulator (MSS), and a result parser (RP). The shaded blocks
are implemented in SLIDER. The MSS used in SLIDER is
Cadence AMS Designer [11]; however, any other mixed-signal
simulator can be easily used instead.
Fig. 2. Block diagram of the SLIDER framework.
Fig. 3. Block diagram that illustrates the top-level testbench created by
SLIDER for mixed-signal simulation.
B. Framework Flow
The inputs to the framework are the design database, a
description of the defects to be injected into the layout or a set
of options to direct the defect-generation process, and the test
patterns. The design database must contain both the layout
and the logic-level netlist. The defect description specifies
the geometry of one or more defects (i.e., coordinates, size,
polygon, layer, and others). If the defect description is not
specified, then options must be provided to direct the defect-generation process. The different options for defect generation
will be described in detail in Section III. The supported defect
types include open, resistive via, signal bridge, supply bridge,
and various kinds of cell defects that include the “nasty
polysilicon defect” [39], transistor stuck-open, and transistor
stuck-closed. The test patterns mimic the tester stimulus that
would be applied to the circuit during production testing.
Using these inputs, SLIDER creates a testbench that can
be understood directly by the MSS to perform mixed-signal
simulation for the defects of interest, as illustrated in Fig. 3.
The flow starts by automatically partitioning the design into
its respective analog and digital blocks, which are created by
the ABG and DBG in Fig. 2, respectively. The analog block
contains all instances and nets that are required to produce accurate circuit-level simulation results for the defect sites, while
the digital block contains everything that remains. Defects are
then injected into the analog block at the layout level. The
analog block, the digital block, and their connections form
the design under test (DUT). To capture the DUT’s response,
a Verilog module, namely, the RP, is defined to sample the
output waveform at the frequency defined by the test clock.
Once the testbench is created, the MSS performs a transient
analysis for a time duration that is sufficient for applying all
test patterns. For example, if the test pattern count is 288 and
the test clock period is 10 ns, then the duration for transient
analysis is 2880 ns. During simulation, the MSS automatically
takes care of the cross-domain signals through the use of
connection modules: an analog-to-digital (digital-to-analog)
convertor for a signal that crosses from the analog (digital)
domain to the digital (analog) domain. A connection module
is easily specified in the Verilog-AMS language [40], and can
be configured on a per simulation basis to ensure accuracy in
the signal conversion.
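As a small worked example of the testbench timing described above (the function name and units are illustrative, not SLIDER's interface):

```python
# Sketch: the transient-analysis duration is simply the pattern count times
# the test-clock period, e.g., 288 patterns at 10 ns -> 2880 ns.
def transient_duration_ns(num_patterns: int, clock_period_ns: float) -> float:
    return num_patterns * clock_period_ns

assert transient_duration_ns(288, 10.0) == 2880.0
```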
This process is repeated automatically for each simulation
run, where a run consists of the injection of one or more
defects. It should be noted that because the design database can
be very large, copying the database is intentionally avoided.
Instead, the design database is modified in place to output the
testbench for MSS. Changes to the original design database
are immediately discarded afterward. The design is purged and
reopened for the next iteration.
C. Analog Block Generation
Fig. 4 illustrates the flow for analog block generation. In the
first step, for each defect, its layout location of injection is used
to determine the affected nets. A net is considered affected
when its corresponding layout polygons have a nonzero-area
overlap with the defect polygon. (If the defect affects a
standard cell, then all the nets connected to the cell are deemed
to be affected.) For each affected net, an influence cone is
identified that consists of all the nets that can influence or
control the voltage of the affected net, or whose voltage can be
rendered indeterminate by the affected net. Thus, the influence
cone typically consists of signals in the transitive fan-in and
fan-out of the affected net. Table I summarizes the types of
nets in the influence cone. Column 1 gives the net type while
column 2 provides a description. Fig. 5 illustrates the various
nets in an influence cone. In this example, an open (shown
as a shaded circle) is injected into net N1. The transitive fan-in of N1 for one logic stage is {N4}. (Refer to column 3 of
Table I for other net-type examples.) The influence cone is
the union of all the nets and is therefore {N1 , N2 , N4 –N7 }
for d = 260 nm, i = 1, and j = 2. Parameters d, i, and j
are all specified by the user for each defect to be injected.
Influence-cone determination is repeated for each defect to be
injected in the circuit. The influence cones for all the defects
are consolidated into a single cone using a set union operation.
It should be emphasized that the influence cone is intentionally
defined to have a well-defined, conservative logical boundary
(i.e., a signal net or a standard cell is never split) to simplify
the design partitioning process.
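A minimal sketch of the influence-cone construction is given below, assuming the logic netlist is exposed through simple graph queries (driver_cell, input_nets, receiver_cells, output_net) and that layout neighbors within distance d come from a separate geometric query; these accessors are illustrative, not SLIDER's actual data model.

```python
# Sketch: influence cone of an affected net = transitive fan-in (i stages) +
# transitive fan-out (j stages) + layout neighbors within distance d + nets
# driving the side inputs of the cells driven by the affected net.
# Single-output standard cells are assumed for simplicity.
def influence_cone(net, i, j, d, netlist, neighbors_within):
    cone = {net}

    def add_fanin(n, depth):
        if depth == 0:
            return
        for inp in netlist.input_nets(netlist.driver_cell(n)):
            cone.add(inp)
            add_fanin(inp, depth - 1)

    def add_fanout(n, depth):
        if depth == 0:
            return
        for cell in netlist.receiver_cells(n):
            out = netlist.output_net(cell)
            cone.add(out)
            add_fanout(out, depth - 1)

    add_fanin(net, i)
    add_fanout(net, j)
    cone.update(neighbors_within(net, d))             # nets within distance d
    for cell in netlist.receiver_cells(net):          # side inputs
        cone.update(n for n in netlist.input_nets(cell) if n != net)
    return cone

# For the example of Fig. 5 (i = 1, j = 2, d = 260 nm) this yields
# {N1, N2, N4, N5, N6, N7}; cones of multiple defects are then unioned.
```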
It is clear that the parameters i, j, and d should be small
to achieve faster runtimes. However, if i and j are too small,
logical indeterminate values (X) can be propagated outside of
the analog block. In addition, if d is too small, some nets
in the neighborhood could be missed, which could lead to
inaccuracy. The rule of thumb is that i and j should be as small
as possible without causing one or more unknown values to
propagate to the circuit primary outputs, while d should be
chosen to include all wires in adjacent routing tracks.

TABLE I
Various Nets That Can Be in the Influence Cone of a Net x

Net Type                  | Description                                                     | Example (See Fig. 5)
Transitive fan-in (x, i)  | Fan-in logic cone of x for i logic stages                       | Fan-in (N1, 1) = {N4}
Transitive fan-out (x, j) | Fan-out logic cone of x for j logic stages                      | Fan-out (N1, 1) = {N6}; Fan-out (N1, 2) = {N6, N7}
Neighbors (x, d)          | Nets within a physical distance d from the net x in the layout | Neighbor (N1, 260 nm) = {N5}
Side inputs (x)           | Nets driving the side inputs of the cells driven by x           | Side input (N1) = {N2, N5}

TABLE II
Layout-Level Modifications Supported and the Associated
Circuit-Level Modifications

Fig. 4. Flow diagram that illustrates analog block creation.
Fig. 5. Illustration of the influence cone (dashed rectangle) for an open
defect.
In the second step of Fig. 4, a new, layout-level subdesign is
created using all the layout polygons that belong to the nets in
the influence cone. The driving cells (e.g., G1 , G2 , G4 –G7 in
Fig. 5) and the receiver cells (e.g., G1 , G3 , G6 , G7 , and G14 in
Fig. 5) of these nets are also included in the subdesign. The resulting polygons in the subdesign define a bounding area (e.g.,
dashed rectangle in Fig. 5) that is used to clip the polygons that
belong to global nets, such as VDD and GND. The clipped
polygons are included in the subdesign. Input and output pins
are then added to form the I/O of the subdesign. Using the
same example in Fig. 5, the input pins are I1 –I5 , the output
pins are O1 and O2 , and the global signals are VDD and GND.
Note that the subdesign creation is an inexpensive operation
because the subdesign layout is very small or very sparse.
The third step of Fig. 4 modifies the subdesign layout to
inject the defect. Table II summarizes the layout modifications
made to the subdesign for various defect types. Column 1 of
Table II shows the defect type, while columns 2 and 3 show
the layout before and after the modification for each defect
type, respectively. Generating cell defects is not described in
Table II since they are generated using the same methods at
the standard-cell level.
In the fourth step of Fig. 4, a circuit-level netlist is extracted
from the “defective” (modified) subdesign layout using the
Cadence extraction engine [41]. Since the layout geometry has
been modified, the extraction engine would reflect all changes
in the form of modified connectivity and parasitic components
(e.g., coupling to adjacent lines, additional load, and others)
within the netlist.
The final step (step 5 of Fig. 4) patches the extracted circuit-level netlist. This step adds flexibility for the user to modify the
netlist to more accurately represent the defect. For example,
in open defect injection, it might be desirable to change the
resistance of the open node to better model a partial open
instead of a complete open. The patching process for each
defect type is described in column 4 of Table II. The default
impedance is 1 GΩ for an open or a resistive via, and 10 Ω for
a signal or supply bridge. These values can be overridden to
better mimic the defect characteristics observed in the process.
In addition, the patching process is not limited to resistance
modification. Inductance and capacitance can also be easily
included as well. The patched circuit is directly used in the
mixed-signal simulation.
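A minimal sketch of the patching step, assuming the extracted netlist is plain SPICE-like text in which the injected defect is modeled by a named resistor; the element naming and default values follow the description above, but the parsing is purely illustrative.

```python
# Sketch: override the impedance of the element that models the defect.
# Defaults follow the text: 1 G-ohm for opens/resistive vias, 10 ohm for bridges.
DEFAULT_R = {"open": 1e9, "resistive_via": 1e9,
             "signal_bridge": 10.0, "supply_bridge": 10.0}

def patch_netlist(netlist_lines, defect_element, defect_type, r_override=None):
    r = DEFAULT_R[defect_type] if r_override is None else r_override
    patched = []
    for line in netlist_lines:
        tokens = line.split()
        if tokens and tokens[0] == defect_element:
            # R<name> <node1> <node2> <value>
            line = f"{tokens[0]} {tokens[1]} {tokens[2]} {r:g}"
        patched.append(line)
    return patched

# Example: model a partial open of 50 k-ohm instead of a complete open.
# patched = patch_netlist(lines, "Ropen1", "open", r_override=50e3)
```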
D. Digital Block Generation
During the formation of the analog block, the digital block is
formed in the background. In fact, it is the design database that
remains after the polygons for the analog block are removed.
Since nets and cells are removed from the original design to
form the analog block, there is a connectivity change in the
database. Therefore, new input and output ports are used to
reflect this change. Using the same example in Fig. 5, the
output ports are I1 –I5 , and the input ports are O1 and O2 .
After the digital and analog blocks are created, the DUT
includes instantiations of these two blocks and connects them
together accordingly using analog-to-digital and digital-to-analog converters as shown in Fig. 3.
Fig. 6. Plot of the probability density function for defect size distribution.
III. Defect Generation
As mentioned in Section II, there are two ways to inject
defects using SLIDER: a user-defined defect description and
automatic defect generation by the DG. In the first method,
users specify a complete XML description [42] for each defect
to be injected. The XML format is used because it is a
well-defined standard with open-source libraries that support
parsing and generation. The advantage of this method is that
users have precise control of layout-level defect injection.
The disadvantage is that users need to specify very low-level
details (e.g., the polygon vertices that describe the defect)
for every defect to be injected. This low-level description
requires knowledge of the design layout and can therefore be
tedious or inconvenient to create. The second option, namely,
automatic defect generation, allows users to specify a high-level description of the desired characteristics of the resulting
defect population. Based on this input, the DG automatically
generates a defect population of size n (specified by the user)
that satisfies the user-defined characteristics. The DG currently
supports three modes of defect population creation: random,
systematic, and diagnostic, which are described next.
A. Random
In this mode, a defect population that follows a given
DDSD is generated. DDSD and critical area [43] are the
essential components of many widely used yield models (e.g.,
Poisson yield model [44], negative binomial yield model
[44], and others) for predicting IC yield loss due to random
contaminations. In these models, a random contaminant of
missing or extra material is often assumed to be circular in
nature (but sometimes rectangular) with radius r. A DDSD is
typically modeled using the inverse power law as follows:

$$
D_i(r) = D_{0i} \cdot f_i(r) =
\begin{cases}
D_{0i}\,\dfrac{2(p_i - 1)\,r}{(p_i + 1)\,X_{0i}^{2}}, & 0 \le r < X_{0i} \\[2ex]
D_{0i}\,\dfrac{2(p_i - 1)\,X_{0i}^{\,p_i - 1}}{(p_i + 1)\,r^{p_i}}, & X_{0i} \le r.
\end{cases}
$$
In these equations, D0i is the defect density and fi (r) is the
probability density function for the defect size. The parameter
r is the defect radius. The subscript i denotes the layer of
interest. Both X0i and pi are model parameters (specified by
the user), where X0i divides r into two regions to better fit
any empirical data, and pi is the power index of r.
Fig. 6 shows an example plot of the probability density function fi (r). Clearly, this is not a commonly used distribution,
such as the normal distribution. Thus, no standard statistical
package provides a random-number generator that follows the
DDSD. Therefore, the method described in [45] is used to
sample from the DDSD and is briefly described here. Let Fi
(r) denote the cumulative distribution function (CDF) of the
DDSD. Let u be a random number that is sampled uniformly
in the range [0, 1]. It can be shown that r = Fi−1 (u) follows the
distribution that has the CDF Fi (r), where Fi−1 is the inverse
of Fi . (A short proof is given in the Appendix.) Since fi (r) is
known, Fi and Fi−1 can be analytically calculated as follows:

$$
F_i(r) = \int_{-\infty}^{r} f_i(r)\,dr =
\begin{cases}
\dfrac{(p_i - 1)\,r^{2}}{(p_i + 1)\,X_{0i}^{2}}, & 0 \le r < X_{0i} \\[2ex]
1 - \dfrac{2\,X_{0i}^{\,p_i - 1}}{(p_i + 1)\,r^{\,p_i - 1}}, & X_{0i} \le r
\end{cases}
$$

$$
F_i^{-1}(u) =
\begin{cases}
\sqrt{\dfrac{(p_i + 1)\,X_{0i}^{2}\,u}{p_i - 1}}, & 0 \le u < F_i(X_{0i}) \\[2ex]
\left[\dfrac{2\,X_{0i}^{\,p_i - 1}}{(p_i + 1)(1 - u)}\right]^{\frac{1}{p_i - 1}}, & F_i(X_{0i}) \le u \le 1.
\end{cases}
$$
More often than not, defects with radii smaller than Rmin are
not going to cause any failures, regardless of their locations.
Therefore, it is desirable to generate defects that have a size
greater than or equal to Rmin . The brute-force approach is to
discard any defect whose size is smaller than Rmin . However,
a better approach is to sample u in the interval [Fi (Rmin ), 1]
instead of [0, 1]. This ensures that no defect size less than
Rmin will be generated. Rmin is another parameter that can be
specified by the user.
Besides defect size, the defect location must also be generated. Let (xmin,i , ymin,i ) and (xmax,i , ymax,i ) denote the coordinates of the lower left corner and the upper right corner,
respectively, of the bounding box of all polygons in the layer
i. The x and y coordinates of the center of a contaminant are
sampled uniformly from the bounding box defined by (xmin,i ,
xmax,i ) and (ymin,i , ymax,i ).
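A minimal sketch of the size and location sampling just described, implementing the inverse-transform method with the closed-form F_i and F_i^-1 above; the layer bounding box is passed in explicitly, and the small guard on u only avoids division by zero at u = 1.

```python
# Sketch: inverse-transform sampling of the DDSD, restricted to sizes >= Rmin,
# plus uniform sampling of the defect center over the layer bounding box.
import math
import random

def F(r, X0, p):
    if r < X0:
        return (p - 1) * r * r / ((p + 1) * X0 * X0)
    return 1.0 - 2.0 * X0 ** (p - 1) / ((p + 1) * r ** (p - 1))

def F_inv(u, X0, p):
    if u < F(X0, X0, p):
        return math.sqrt((p + 1) * X0 * X0 * u / (p - 1))
    return (2.0 * X0 ** (p - 1) / ((p + 1) * (1.0 - u))) ** (1.0 / (p - 1))

def sample_defect(X0, p, Rmin, bbox):
    (xmin, ymin), (xmax, ymax) = bbox
    u = random.uniform(F(Rmin, X0, p), 1.0 - 1e-12)   # ensures radius >= Rmin
    r = F_inv(u, X0, p)
    return r, (random.uniform(xmin, xmax), random.uniform(ymin, ymax))
```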
Despite the requirement that the defect size has to be greater
than Rmin , the generated defect at location (xi , yi ) may not
cause a failure because the defect may not reside within the
critical area of the circuit [43]. It would be inefficient to
perform a simulation when the defect is not going to cause
failures. Thus, a “sanity check” is performed before simulation. This check is defect-type dependent and uses the design
layout to determine if a given defect will possibly cause circuit
failure. Specifically, if the defect is an open, the check ensures
that the defect will split the polygon(s) it contacts into two or
more pieces. If the defect is a bridge, the check ensures that
the defect connects at least two polygons from different nets.
For resistive vias, the check ensures that the defect contacts
at least one via. Simulation is performed only for the defects
that have passed these checks. Despite these checks, it is still
possible that the injected defect creates a defective circuit that
is, however, not detectable by the simulated test patterns.
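A minimal sketch of the defect-type-dependent sanity check, using shapely purely for the polygon operations; the layout-query helpers (polygons, net_of, vias) are hypothetical placeholders for whatever the design database provides.

```python
# Sketch: pre-simulation sanity checks. An open must split the polygon(s) it
# touches, a bridge must touch polygons of at least two nets, and a resistive
# via must touch at least one via.
from shapely.geometry import Point

def passes_sanity_check(defect, layout):
    spot = Point(defect.x, defect.y).buffer(defect.radius)  # circular defect
    touched = [p for p in layout.polygons(defect.layer) if p.intersects(spot)]
    if defect.kind == "open":
        # Subtracting the defect must break some touched polygon into pieces.
        return any(poly.difference(spot).geom_type == "MultiPolygon"
                   for poly in touched)
    if defect.kind == "bridge":
        return len({layout.net_of(p) for p in touched}) >= 2
    if defect.kind == "resistive_via":
        return any(v.intersects(spot) for v in layout.vias(defect.layer))
    return True
```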
B. Systematic
In this mode, a defect population that contains some percentage of systematic defects is generated. Systematic defects
arise due to design–process interactions, i.e., certain layout
features are more sensitive than expected to the manufacturing
process and are therefore more susceptible to failures. This
situation can lead to an increase in the likelihood of failure
in all locations with similar, sensitive layout features. Fig. 7
shows two examples of a systematic defect [46] that cause a
bridge and an open, respectively.
Fig. 7. Systematic defects that cause (a) a bridge and (b) an open. Courtesy
of [46].
Fig. 8. Illustration of systematic defect injection.
Systematic defects are often caused by pattern distortion
resulting from the use of subwavelength lithography [47],
but can also be due to other issues that include, e.g., metal
delamination that is induced by CMP [3]. Systematic defects
are injected at locations that have similar layout patterns.
Specifically, SLIDER uses the commercial tool Calibre pattern
matching [48] to identify similar patterns in the layout. Similarity here is defined by a set of constraints on the geometries
of the user-provided pattern. An example of a constraint
would be that the via-to-via distance for the pattern in Fig. 8
must be between 0.3 μm and 0.5 μm. (The interested reader is
referred to [48] for more details on how to use constraints to
define pattern similarity.) Defects are automatically injected at
predefined locations of the matching geometry, thus creating a
population of systematic defects since all the generated defects
share a similar layout pattern. The process used by SLIDER to
create a systematic defect is illustrated in Fig. 8. The pattern
to be matched (shown on the right in Fig. 8) is searched for
within the layout. The areas identified by a square are the
matched locations. The shaded marker indicates the defect-injection location (in this case an open). In this example, a
population consisting of three open defects is generated.
Users can either provide the pattern directly or let SLIDER
randomly generate a pattern from the design layout. If the
latter is used, users must also provide the dimension and layer
of the pattern to be generated. SLIDER also provides options
for users to control the complexity of the pattern. For example,
users can specify the minimum number of geometries that
must be involved in the pattern, the minimum or maximum
number of times a pattern must occur in the layout, and so
on. Similar to the random-defect mode, SLIDER performs
the same “sanity checks” to ensure the systematic defects
generated have an increased likelihood to lead to failure.
C. Diagnostic
As mentioned in Section I-A, SLIDER can be used to
explore different defect-injection scenarios for a particular net
in order to improve diagnosis resolution or defect localization.
In this mode, the user specifies a net name and the defect
type. The DG will automatically retrieve the layout geometries
associated with this net and create a defect of the specified
type in each of these geometries. For example, suppose net
N1 is routed using ten polygons from metal-1 to metal-3, and
the defect type specified is an open, then an open defect will
be injected at the center of each polygon associated with N1
in each layer. Similarly, when the defect type is a resistive
via, SLIDER will retrieve all the vias associated with the net.
Resistive vias are then injected at these via locations one at
a time to create a population. If the defect type is a bridge,
each layout geometry associated with this net is expanded by
a user-defined distance to find neighboring geometries that are
not associated with this net. For each geometry associated with
the net, bridge defects are injected one at a time between
the geometry of the net currently under consideration and each
neighboring geometry. Last, if the defect type is a cell defect
(such as transistor stuck closed), the cells connected to the
given net are retrieved. For each cell retrieved, each transistor
in the cell is assumed to be faulty one at a time to create a
cell defect population.
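A minimal sketch of the diagnostic defect-generation mode described above; the layout accessors (polygons_of, vias_of, neighbors, net_of, cells_of) are illustrative placeholders.

```python
# Sketch: enumerate one injection scenario per geometry, via, neighboring
# geometry, or transistor for a candidate net reported by diagnosis.
def diagnostic_scenarios(net, defect_type, layout, expand_dist=0.5):
    if defect_type == "open":
        return [("open", p.layer, p.center()) for p in layout.polygons_of(net)]
    if defect_type == "resistive_via":
        return [("resistive_via", v.layer, v.location()) for v in layout.vias_of(net)]
    if defect_type == "bridge":
        scenarios = []
        for p in layout.polygons_of(net):
            for nbr in layout.neighbors(p, expand_dist):
                if layout.net_of(nbr) != net:
                    scenarios.append(("bridge", p.layer, (p, nbr)))
        return scenarios
    if defect_type == "cell":
        return [("cell", c.name, t) for c in layout.cells_of(net)
                for t in c.transistors()]
    raise ValueError(f"unsupported defect type: {defect_type}")
```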
IV. Experiments
In this section, experimental results are presented that demonstrate how SLIDER can be used in various scenarios. Specifically, four sets of experiments are presented: 1) realistic
failure behavior; 2) scalability; 3) diagnosis improvement; and
4) diagnosis and yield-learning evaluation.
A. Realistic Failure Behavior
To demonstrate the realistic failure behavior that can be
obtained using SLIDER, 12 benchmark circuits [49], [50],
varying in size and complexity, are placed and routed using
Cadence First Encounter [51]. The technology used is the
TSMC 180-nm CMOS process from the publicly accessible
MOSIS webpage [52]. In this experiment, we are interested in
creating failing tester responses that mimic slow-speed scan
test. Thus, no simulation is performed to create delay-test
responses. Test sets are created for each benchmark using
Atalanta [53], each achieving 100% stuck-at fault coverage.
The critical-path delay of all benchmarks does not exceed
3.6 ns. To balance simulation time and accurately resemble a
stuck-at, scan-test environment, the applied test clock period
is chosen to be 10 ns. The test patterns are applied to each
benchmark one-by-one in a single simulation, i.e., all tests for
a single defective circuit are simulated in a single run.
TABLE III
Statistics of Simulated Tester Responses That Exhibit Location Dependency for Opens, Resistive Vias, and Signal Bridges

Benchmark |              Opens               |          Resistive Vias          |             Bridges
Name      | Ndiff | Nmod. | Dfailed | Dinj.  | Ndiff | Nmod. | Dfailed | Dinj.  | Ndiff | Nmod. | Dfailed | Dinj.
C432      |   45  |  336  |  1274   | 1277   |   13  |   83  |   646   |  648   |  109  |  116  |   435   |  450
C499      |   83  |  441  |  1586   | 1602   |   49  |  273  |   906   |  913   |   79  |   84  |   407   |  428
C880      |   92  |  646  |  2262   | 2262   |   58  |  301  |  1096   | 1097   |  166  |  388  |  1084   | 1163
C1355     |  181  |  960  |  3361   | 3372   |   81  |  455  |  1660   | 1664   |  190  |  238  |  1053   | 1161
S208      |   11  |  140  |   509   |  514   |    5  |   58  |   213   |  215   |   21  |   28  |   150   |  158
S382      |   26  |  263  |   931   |  932   |   14  |  132  |   443   |  444   |   25  |   36  |   226   |  251
S386      |   70  |  311  |  1112   | 1115   |   29  |  165  |   564   |  566   |   67  |   72  |   329   |  340
S444      |   34  |  319  |  1085   | 1093   |   15  |  133  |   481   |  484   |   47  |   63  |   328   |  350
S510      |   73  |  399  |  1436   | 1439   |   35  |  179  |   669   |  672   |  114  |  121  |   494   |  512
S641      |   65  |  464  |  1781   | 1783   |   33  |  212  |   766   |  767   |   68  |   76  |   358   |  387
S713      |   49  |  500  |  1756   | 1829   |   29  |  234  |   774   |  808   |   87  |  101  |   456   |  512
S953      |  173  |  727  |  2636   | 2643   |  101  |  365  |  1280   | 1284   |  237  |  251  |   958   | 1006

(Nmod. = Nmodified; Dinj. = Dinjected)
For every net in each benchmark, defects of the same type
are injected at various locations along the same segment of the
same net. Fig. 9 illustrates this process. As shown in Fig. 9, net
f has five different segments (labeled B1 –B5 ). In this example,
two defects, denoted by circular dots, are injected one at a time
at two different locations along B3 in two different simulations.
Defects are injected into the remaining segments B1 , B2 ,
B4 , and B5 in a similar manner. This process is repeated
for all the nets for each benchmark. If SLIDER accurately
captures the physical location of the defect in the extracted
netlist, the simulated tester responses of the circuit should be
different for some nets (e.g., a long net). This experiment is
repeated for three defect types that include an interconnect
open, a resistive via, and a signal bridge. Results of these
simulations are summarized in Table III. (Supply bridges and
standard-cell defects are not used in this experiment because
they do not exhibit location dependency at the interconnect
level.) The number of net segments exhibiting a location-dependent response (Ndiff) is recorded along with the number
of net segments modified (Nmodified). Similarly, the number of
defective circuits that fail at least one test (Dfailed) is recorded
along with the number of defects injected (Dinjected ). This latter
data is included to show that simulation resources are rarely
wasted on undetected defects.
From these results, it is clear that even for a simple circuit
like S208, which consists of only 96 gates, the framework
is able to produce location-dependent tester responses for
all three defect categories. The ability to capture location-dependent behavior is very important especially for opens
since it is well known that the exact location of an open can
have a measurable impact on the tester response [21], [22]. For
each defect category in Table III, the ratio of the number of
nets with a different tester response (Ndiff) to the total number
of net segments modified (Nmodified) is less than the ideal value of one
because the tester response is binary in nature and does not
necessarily reflect the differences in the analog output signals
caused by location changes of the injected defect. In addition,
Table III shows that the responses generated for bridges at
different locations are very likely to be different (82% on average), when compared to the other two defect types. This result
is expected since a bridge injected at different locations along
a given segment is likely to bridge to different neighboring
nets, which generally results in different tester responses.
B. Scalability
To demonstrate the scalability of SLIDER, another 11
benchmark circuits of much larger sizes than those used in
the previous experiment (including two ITC99 benchmarks
B14 and B18 [54]) are placed and routed using Cadence
Encounter with the same CMOS process. For each circuit,
100 random defects taken from Table II are injected and simulated. The results are averaged and summarized in Table IV.
Column 1 shows the circuit name, while columns 2–4 provide
the transitive fan-in and fan-out levels, and physical distance
parameter d, respectively. The total number of gates for each
circuit is given in column 5, while column 6 gives the number
of gates in the influence cone that is an approximate size
measure of the analog block. The number of test patterns, the
corresponding fault coverage, and the test-clock period (in ns)
are provided in columns 7–9, respectively. These test patterns
are generated using a commercial ATPG tool. The simulation
time and the total runtime are summarized in columns 10 and
11, respectively. Comparing columns 10 and 11, it is clear that
the additional computation time for circuit partitioning incurs
negligible overhead to the total runtime, which is dominated by
the simulation time. It should be emphasized that, more often
than not, only a subset of test patterns has to be simulated.
For example, since it is likely that interest lies on how
the injected defect behaves for failing test patterns, one can
identify those patterns that sensitize the defective net (using
inexpensive logic simulation) and then employ SLIDER for
only those patterns. The number of test patterns that sensitize
the affected net is typically a very small percentage of the
total number of test patterns. Therefore, the simulation time
and the total runtime are normalized by the pattern count
and are shown in columns 12 and 13, respectively. Note
that the simulation time is not further normalized by the
test-clock period since the circuit has to be fully simulated
for the entire test-clock period to obtain meaningful results.
Column 13 clearly shows that runtime increases slightly even
though the circuit size has increased substantially. The runtime
increase is due to the digital block getting larger as the
circuit scales. The neighborhood distance and fan-out level
are reduced for B14 and B18 to ensure a small analog block
size (i.e., small number of gates in the influence logic cone).
Despite these changes, there is no loss of accuracy because
the outputs have been verified to contain no unknown values
(i.e., Xs) and all the nets in the adjacent routing tracks are still
included.

TABLE IV
Summary of the Runtime Experiment Results to Demonstrate the Scalability of the SLIDER Framework

Circuit | Fan-In | Fan-Out | Neighbor   | No. of | No. of Gates | No. of   | Fault    | Test Period | Simulation | Total    | Sim. Time per | Total Time per
Name    | Levels | Levels  | Dist. (μm) | Gates  | in Cone      | Patterns | Cov. (%) | (ns)        | Time (s)   | Time (s) | Pattern (s)   | Pattern (s)
C432    |   1    |    2    |    0.5     |    172 |    33.62     |    49    |   98.93  |    10.00    |    48.21   |   53.55  |     0.98      |     1.09
C1355   |   1    |    2    |    0.5     |    582 |    24.13     |   101    |  100.00  |    10.00    |    45.15   |   50.65  |     0.45      |     0.50
C3540   |   1    |    2    |    0.5     |   1743 |    37.15     |   139    |  100.00  |    10.00    |    90.43   |   96.29  |     0.65      |     0.69
C7552   |   1    |    2    |    0.5     |   3813 |    27.92     |   102    |   99.97  |    10.00    |    65.48   |   72.11  |     0.64      |     0.71
S13207  |   1    |    2    |    0.5     |   8483 |    22.69     |   270    |   99.98  |    10.00    |   100.97   |  109.07  |     0.37      |     0.40
S15850  |   1    |    2    |    0.5     |   9940 |    24.60     |   135    |  100.00  |    10.00    |   118.96   |  129.74  |     0.88      |     0.96
S35932  |   1    |    2    |    0.5     |  16353 |    42.16     |    23    |  100.00  |    10.00    |   113.62   |  128.09  |     4.94      |     5.57
S38417  |   1    |    2    |    0.5     |  22609 |    34.42     |   108    |  100.00  |    10.00    |   123.58   |  135.93  |     1.14      |     1.26
S38584  |   1    |    2    |    0.5     |  20193 |    48.09     |   151    |   99.97  |    10.00    |   162.69   |  175.15  |     1.08      |     1.16
B14     |   1    |    1    |    0.4     |  10033 |    55.87     |   746    |   99.98  |    10.00    |   362.74   |  372.16  |     0.49      |     0.50
B18     |   1    |    1    |    0.4     | 116840 |    63.94     |  1212    |   99.90  |    18.00    |  1750.39   | 1803.42  |     1.44      |     1.49
TABLE V
Characteristics of Diagnosis Candidates for Example One

Candidate Name         | Fault Matching Score | Fault Type | Response Match?
N7671                  |         100%         |    sa01    |       No
FIXED_FAN-IN_N7671_0   |         100%         |    sa01    |       Yes
C. Diagnosis
To demonstrate the utility of SLIDER in the diagnosis flow,
an experiment is performed to show how diagnostic resolution
and localization can be improved. In this experiment, virtual
failures from the scalability experiment are diagnosed using
a commercial diagnostic tool. For diagnostic outcomes that
are ambiguous (nonideal resolution, long candidate nets, etc.),
SLIDER is then used to reduce the level of ambiguity. Specifically, using the diagnostic defect generation mode of SLIDER,
defects are injected at various sites along the candidate net(s)
and are simulated to identify a defect type and location that
produces a simulation response that best matches the observed
tester response. It should be emphasized that this is a blind
experiment in that the identified defect is checked against the
injected defect at the end of the experiment. Two examples are
given to illustrate improvement in: 1) diagnostic resolution,
and 2) defect localization.
The first example uses a diagnosis result for the benchmark
circuit C7552. The commercial tool reports two fault candidates whose characteristics are summarized in Table V. Both
candidates have the type “sa01” (i.e., the candidate exhibits
stuck-at-0 behavior for some failing test patterns and stuck-at-1 behavior for other failing test patterns). Both candidates
match all the failing patterns and therefore they have a 100%
matching score, which is calculated using the formula [55],
[56] as follows:
$$
\text{Fault matching score} = \frac{tfsf}{tfsf + tfsp + tpsf}
$$
where tfsf (tester-fail, simulation-fail) is the number of times
when both the tester and the simulation produce matching
failing responses, and tfsp (tpsf ) is the number of test patterns
where the tester failed (passed) but the simulator passed
(failed).
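A one-line sketch of this score; a candidate that explains every failing pattern and predicts no extra failures has tfsp = tpsf = 0 and therefore a 100% score.

```python
def fault_matching_score(tfsf: int, tfsp: int, tpsf: int) -> float:
    # tfsf: tester fail / simulation fail; tfsp: tester fail / simulation pass;
    # tpsf: tester pass / simulation fail.
    return tfsf / (tfsf + tfsp + tpsf)
```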
Fig. 9. Illustration of defect injection along the same segment (B3) in the net f.
Since a sa01 tester response is a strong indication of an
open, an open defect is injected at various locations along the
nets that correspond to the two diagnosis candidates. However,
only defects injected into the second candidate produce a
mixed-signal simulation response that exactly matches all the
passing and failing test patterns. Therefore, the candidate
N7671 is eliminated, leaving only the second candidate, which,
upon verification, is the actual site used to create the original
(virtual) test response. This small experiment demonstrates
that diagnostic resolution can be improved using SLIDER.
The second example uses a diagnosis result for the benchmark circuit C3540. In this example, we demonstrate defect
localization improvement. For this run, the commercial diagnosis tool returns a single candidate (N317) of type sa01 with
a 100% matching score. This net has four fan-outs, G1 –G4 , as
shown in Fig. 10. (Fig. 10 is drawn based on the topology of
the actual layout but is not drawn to scale.) Again, because of
the sa01 tester response, open defects are injected at various locations (numbered circles) along the layout-level net that corresponds to the candidate. A simulation response match occurs
only when the defect is injected at location 1 shown in Fig. 10.
TABLE VI
Quantification of Diagnosis Accuracy

Benchmark Name | Diagnosis Accuracy
C432           |       78.4%
C499           |       89.8%
C880           |       86.3%
C1355          |       79.3%
S208           |       85.2%
S382           |       77.6%
S386           |       85.4%
S444           |       94.9%
S510           |       90.0%
S641           |       76.0%
S713           |       80.5%
S953           |       96.7%
Fig. 10. Injection of an open defect (circle) in various locations along the
diagnosis candidate (N317) for better defect localization.
In other words, the defect has been localized to the region
defined by the dashed bounding box (a wire-length localization
improvement of 53%) that has been verified to be correct by
examining the analog block used to create the test response.
D. Methodology Evaluation
In this experiment, virtual failure data is created using
SLIDER for investigating: 1) the accuracy of a new diagnosis
approach, and 2) the effectiveness of a systematic defect
identification methodology.
In the first experiment, the physically aware diagnosis
methodology described in [57] and [58] is applied to the
population of failures described in Section IV-A. In [57] and
[58], both passing and failing tester patterns are analyzed at the
layout level to identify potential defect site(s) and conditions
for defect or failure activation. At the layout level, nets that
are physically close to the defect site (i.e., neighbors) and
are found to influence defect activation are identified. The
neighbors and the logic values required for defect activation
are mapped to a set of one or more fault-tuple products [34],
which are then disjunctively combined into a macrofault that
is an expression that represents the logical conditions that
must be satisfied for a site to become faulty. For example,
the macrofault (n1 , 0, i)c (n2 , 0, i)c (sx , 1, i)e + (n1 , 1, i)c (n2 ,
1, i)c (sx , 1, i)e indicates that site sx becomes stuck-at-1 when
n1 = n2 = 0 or n1 = n2 = 1. (More details of fault-tuple
macrofaults can be found in [34].) The resulting macrofault
is then fault simulated using all test patterns applied to the
failing chip. Only those macrofaults that produce a simulation
response that exactly matches the tester response are reported
as an outcome of diagnosis.
Table VI reports the accuracy of physically aware diagnosis.
A diagnosis outcome is considered correct if it has at least
one macrofault that contains the injected defect location. From
Table VI, we can see that the diagnosis methodology described
in [57] and [58] is highly accurate, achieving 85% accuracy
on average. More interestingly, the failure data generated
using SLIDER has concrete examples where physically aware
diagnosis fails. Such examples facilitate debugging and enable
further improvement of the methodology.
In the second experiment, the systematic defect identification methodology LASIC (layout analysis for system defect
identification using clustering) [7] is evaluated. The input to
LASIC includes the volume-diagnosis results from a large
population of IC failures. Images of the layout regions that
correspond to the failure locations isolated by diagnosis are
then extracted from the design layout. Clustering [59] of the
layout regions is performed to group similar images together.
The clusters formed are sorted according to their sizes in
decreasing order. The resulting order is called the rank of
the cluster. For example, the largest cluster has rank one,
the second largest cluster has rank two, and so on. The
dominant clusters (low-number ranks) are of particular interest
because they represent substantial commonalities in layout
features among the observed failures, and are likely to contain
systematic defects (if any). LASIC further suggests confirming
the existence of the systematic defects by simulation (e.g.,
lithography simulator [60]) or through PFA.
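A minimal sketch of the clustering-and-ranking step, using k-means from scikit-learn on flattened layout-clip images; the image representation and cluster count are assumptions for illustration and are not LASIC's actual settings.

```python
# Sketch: cluster layout images extracted around diagnosis-reported locations
# and rank the clusters by size (rank 1 = largest cluster).
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def rank_clusters(layout_images, n_clusters=50):
    X = np.array([img.ravel() for img in layout_images], dtype=float)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    by_size = [cid for cid, _ in Counter(labels).most_common()]
    rank = {cid: i + 1 for i, cid in enumerate(by_size)}   # rank 1 = largest
    return labels, rank
```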
To investigate the effectiveness of LASIC, two failure
populations (one random and one systematic), each of size
100, are generated, using the random and systematic defectgeneration modes of SLIDER, respectively. Failures are then
randomly sampled from both populations to form a single
population that consists of both random and systematic defects.
This process is repeated to generate failure populations with
a different proportion of random and systematic defects. For
example, in the first iteration, the population has 100 systematic defects and 0 random defects; in the second iteration, the
population has 90 systematic defects and 10 random defects;
in the third iteration, the population has 80 systematic defects
and 20 random defects; and so on. The goal is to evaluate
how much random “noise” can be tolerated by LASIC. The
experiment is performed using eight benchmark circuits. Four
types of defects are injected in these benchmarks: metal-2
bridge, metal-2 open, metal-3 bridge, and metal-3 open. The
results are averaged over the eight benchmarks. In addition,
to isolate the “noise” that can happen due to inaccuracies or
ambiguities in diagnosis, this experiment is performed under
two scenarios: ideal diagnosis and “real” diagnosis. Ideal
diagnosis here means that the outcome of diagnosis correctly
pinpoints the site of the injected defect with no ambiguity,
while real diagnosis can be inaccurate (i.e., diagnosis outcome
does not contain the injected-defect site) and/or low resolution
(i.e., additional sites are reported along with the actual site
of the injected defect). Since virtual failure data is used,
ideal diagnosis data can be easily obtained by just finding
the net(s) whose geometry overlaps with that of the defect.
Real diagnosis is obtained by applying a commercial diagnosis
tool to the virtual failure data, which results in inaccuracies
or ambiguities due to the inherent limitation of logic-level
diagnosis. This can be viewed as another source of “noise” in
the data. Comparing the two scenarios allows us to understand
how the technique performs under the ideal and real scenarios.
The cluster that contains the layout pattern underlying the injected systematic defect is deemed the correct cluster. For each scenario,
the average rank of the correct cluster is plotted against the percentage of systematic defects injected. The data is summarized in Fig. 11 for ideal and real diagnoses.

Fig. 11. LASIC evaluation for (a) ideal and (b) real diagnoses.
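For reference, the mixing sweep described above can be scripted roughly as follows; the failure-population objects and the evaluate_rank callback (standing in for diagnosis, layout-snippet extraction, clustering, and ranking of the correct cluster) are placeholders for the actual SLIDER and LASIC flows, not part of either tool.

import random

def mixed_population(systematic, rand, n_systematic, total=100, seed=0):
    # Sample n_systematic systematic failures and (total - n_systematic) random
    # failures, without replacement, to form one mixed population.
    rng = random.Random(seed)
    return rng.sample(systematic, n_systematic) + rng.sample(rand, total - n_systematic)

def sweep(systematic_failures, random_failures, evaluate_rank):
    # Vary the systematic share from 100% down to 0% in steps of ten and record
    # the rank of the correct cluster at each mixing proportion.
    return {n: evaluate_rank(mixed_population(systematic_failures, random_failures, n))
            for n in range(100, -1, -10)}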
It is clear from Fig. 11 that the average rank of the
correct cluster decreases (which of course is desirable since
a lower rank means the layout geometry associated with the
injected defect resides in a larger cluster) as the percentage of
systematic defects increases for both ideal and real diagnoses.
It is also evident that the technique is effective for both ideal
and real diagnoses because the correct cluster is present in
the top 40 ranked clusters (top 3%) even when the population consists of only 20% systematic defects. The performance of
LASIC is expectedly worse in the real-diagnosis scenario due
to diagnosis ambiguities and inaccuracies.
As a side note, when LASIC was first evaluated, the initial results differed from what is shown in Fig. 11. Bugs were discovered and LASIC was subsequently improved. Identifying these bugs would have been difficult without the virtual failure data generated by SLIDER.
V. Conclusion
This paper described SLIDER, a comprehensive framework for performing accurate and scalable defect simulation.
SLIDER made use of mature, commercial mixed-signal simulation technology, making it capable of handling large designs and poised
to benefit from any future improvements in mixed-signal
simulation environments. SLIDER performed automatic defect
injection at the layout level, thus ensuring that defects are represented as accurately as possible. SLIDER enabled automatic, fast, and accurate creation of virtual failure data that is useful in
a variety of settings that include diagnosis resolution improvement, defect localization, test metric or fault model evaluation,
diagnosis, and yield-learning methodology evaluation. Various
experiments have demonstrated the utility of SLIDER to
improve diagnosis. Specifically, diagnosis resolution improvement and defect localization were achieved through more
accurate defect simulation with SLIDER. Other experiments
have also demonstrated the utility of SLIDER to quantify
diagnosis accuracy, and to understand the performance and
limitations of a recently introduced systematic defect identification methodology [7]. We plan to “release” SLIDER data through our website [the Advanced Chip Testing Laboratory (ACTL), www.ece.cmu.edu/∼actl], enabling other researchers to have access to large populations of virtual failures.
Appendix

To generate defect sizes that follow the distribution f_i(r), we use the following theorem: let F be a CDF and let U ∼ Uniform(0, 1); then R = F^{-1}(U) follows the distribution with CDF F.

Proof: Every CDF has range [0, 1], and, when U ∼ Uniform(0, 1), Pr(U ≤ u) = u for u in [0, 1]. By definition, the CDF of R is Pr(R ≤ r) = Pr(F^{-1}(U) ≤ r) = Pr(U ≤ F(r)) = F(r).
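For illustration, inverse-transform sampling based on this theorem takes only a few lines of Python; the truncated 1/r^3 defect-size density in the example is assumed here only because it has a closed-form inverse CDF, and is not necessarily the f_i(r) distribution used by SLIDER.

import random

def sample_defect_size(inverse_cdf, rng=random):
    # Draw U ~ Uniform(0, 1) and return R = F^{-1}(U); by the theorem above, R has CDF F.
    return inverse_cdf(rng.random())

def make_inverse_cdf(r0=0.05, r_max=5.0):
    # Closed-form inverse CDF of a defect-size density proportional to 1/r^3 on
    # [r0, r_max] (an illustrative choice, not necessarily SLIDER's f_i(r)).
    a, b = r0 ** -2, r_max ** -2
    return lambda u: (a - u * (a - b)) ** -0.5

print([round(sample_defect_size(make_inverse_cdf()), 3) for _ in range(5)])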
References
[1] W. C. Tam and R. D. Blanton, “SLIDER: A fast and accurate defect
simulation framework,” in Proc. VLSI Test Symp., 2011, pp. 172–177.
[2] D. Z. Pan, S. Renwick, V. Singh, and J. Huckabay, “Nanolithography
and CAD challenges for 32 nm/22 nm and beyond,” in Proc. Int. Conf.
Comput.-Aided Des., Nov. 2008, p. 7.
[3] P. Leduc, M. Savoye, S. Maitrejean, D. Scevola, V. Jousseaume, and
G. Passemard, “Understanding CMP-induced delamination in ultra low-k/Cu integration,” in Proc. Interconnect Technol. Conf., 2005, pp. 209–
211.
[4] J. H. Yeh and A. Park, “Novel technique to identify systematic and
random defects during 65 nm and 45 nm process development for faster
yield learning,” in Proc. Adv. Semicond. Manuf. Conf. Workshop, 2007,
pp. 54–57.
[5] B. Kruseman, A. Majhi, C. Hora, S. Eichenberger, and J. Meirlevede,
“Systematic defects in deep sub-micron technologies,” in Proc. Int. Test
Conf., 2004, pp. 290–299.
[6] R. Turakhia, M. Ward, S. K. Goel, and B. Benware, “Bridging DFM
analysis and volume diagnostics for yield learning: A case study,” in
Proc. VLSI Test Symp., May 2009, pp. 167–172.
[7] W. C. Tam, O. Poku, and R. D. Blanton, “Systematic defect identification
through layout snippet clustering,” in Proc. Int. Test Conf., 2010, p. 13.2.
[8] L. M. Huisman, M. Kassab, and L. Pastel, “Data mining integrated
circuit fails with fail commonalities,” in Proc. Int. Test Conf., 2004, pp.
661–668.
[9] M. Sharma, C. Schuermyer, and B. Benware, “Determination of
dominant-yield-loss mechanism with volume diagnosis,” IEEE Des. Test
Comput., vol. 27, no. 3, pp. 54–61, May–Jun. 2010.
[10] R. Desineni, L. Pastel, M. Kassab, M. F. Fayaz, and J. Lee, “Identifying
design systematics using learning based diagnostic analysis,” in Proc.
Adv. Semicond. Manuf. Conf., 2010, pp. 317–321.
[11] H. Tang, M. Sharma, J. Rajski, M. Keim, and B. Benware, “Analyzing
volume diagnosis results with statistical learning for yield improvement,”
in Proc. Eur. Test Symp., 2007, pp. 145–150.
[12] M. Keim, N. Tamarapalli, H. Tang, M. Sharma, J. Rajski, C. Schuermyer,
and B. Benware, “A rapid yield learning flow based on production
integrated layout-aware diagnosis,” in Proc. Int. Test Conf., 2006, pp.
1–10.
[13] W. C. Tam and R. D. Blanton, “To DFM or not to DFM?” in Proc. Des.
Automat. Conf., Jun. 2011, pp. 65–70.
[14] R. Kapur, M. Kapur, and M. Kapur, “Systemic diagnostics for increasing
wafer yield,” U.S. Patent 20110040528, 2011.
[15] J. E. Nelson, W. Maly, and R. D. Blanton, “Diagnosis-enhanced extraction of defect density and size distributions from digital logic ICs,” in
Proc. SRC TECHCON, 2007.
[16] X. Yu and R. D. Blanton, “Estimating defect-type distributions through
volume diagnosis and defect behavior attribution,” in Proc. Int. Test
Conf., 2010, pp. 1–10.
[17] X. Yu, Y.-T. Lin, W.-C. Tam, O. Poku, and R. D. Blanton, “Controlling
DPPM through volume diagnosis,” in Proc. VLSI Test Symp., May 2009,
pp. 134–139.
[18] J. Jahangiri and D. Abercrombie, “Value-added defect testing techniques,” IEEE Des. Test Comput., vol. 22, no. 3, pp. 224–231, May–Jun.
2005.
TAM AND BLANTON: SLIDER: SIMULATION OF LAYOUT-INJECTED DEFECTS FOR ELECTRICAL RESPONSES
[19] M. Sharma, B. Benware, L. Ling, D. Abercrombie, L. Lee, M. Keim,
H. Tang, W.-T. Cheng, T.-P. Tai, Y.-J. Chang, R. Lin, and A. Man,
“Efficiently performing yield enhancements by identifying dominant
physical root cause from test fail data,” in Proc. Int. Test Conf., Oct.
2008, pp. 1–9.
[20] S. Eichenberger, J. Geuzebroek, C. Hora, B. Kruseman, and A. Majhi,
“Towards a world without test escapes: The use of volume diagnosis to
improve test quality,” in Proc. Int. Test Conf., Oct. 2008, pp. 1–10.
[21] R. Rodriguez-Montanes, D. Arumi, J. Figueras, S. Eichenberger, C.
Hora, B. Kruseman, M. Lousberg, and A. K. Majhi, “Diagnosis of full
open defects in interconnecting lines,” in Proc. VLSI Test Symp., May
2007, pp. 158–166.
[22] Y. Sato, L. Yamazaki, H. Yamanaka, T. Ikeda, and M. Takakura, “A
persistent diagnostic technique for unstable defects,” in Proc. Int. Test
Conf., 2002, pp. 242–249.
[23] D. B. Lavo, T. Larrabee, and B. Chess, “Beyond the Byzantine generals:
Unexpected behavior and bridging fault diagnosis,” in Proc. Int. Test
Conf., 1996, pp. 611–619.
[24] P. Ke, H. Yu, P. Mallick, W.-T. Cheng, and M. Tehranipoor, “Full-circuit
SPICE simulation based validation of dynamic delay estimation,” in
Proc. Eur. Test Symp., May 2010, pp. 101–106.
[25] J. Khare and W. Maly, “Rapid failure analysis using contamination-defect-fault (CDF) simulation,” IEEE Trans. Semicond. Manuf., vol. 9,
no. 4, pp. 518–526, Nov. 1996.
[26] J. Khare and W. Maly, “Inductive contamination analysis (ICA) with
SRAM application,” in Proc. Int. Test Conf., 1995, pp. 552–560.
[27] J. Khare, W. Maly, and N. Tiday, “Fault characterization of standard cell
libraries using inductive contamination analysis (ICA),” in Proc. VLSI
Test Symp., 1996, pp. 405–413.
[28] A. Jee and F. J. Ferguson, “Carafe: An inductive fault analysis tool for
CMOS VLSI circuits,” in Proc. VLSI Test Symp., 1993, pp. 92–98.
[29] T. Vogels, T. Zanon, R. Desineni, R. D. Blanton, W. Maly, J. G. Brown,
J. E. Nelson, Y. Fei, X. Huang, P. Gopalakrishnan, M. Mishra, V. Rovner,
and S. Tiwary, “Benchmarking diagnosis algorithms with a diverse set
of IC deformations,” in Proc. Int. Test Conf., 2004, pp. 508–517.
[30] P. Banerjee and J. A. Abraham, “A multivalued algebra for modeling
physical failures in MOS VLSI circuits,” IEEE Trans. Comput.-Aided
Des. Integr. Circuits Syst., vol. 4, no. 3, pp. 312–321, Jul. 1985.
[31] W. Meyer and R. Camposano, “Fast hierarchical multi-level fault simulation of sequential circuits with switch-level accuracy,” in Proc. Des.
Automat. Conf., 1993, pp. 515–519.
[32] M. B. Santos and J. P. Teixeira, “Defect-oriented mixed-level fault
simulation of digital systems-on-a-chip using HDL,” in Proc. Des.,
Automat. Test Eur., 1999, pp. 549–553.
[33] X. Yu and R. D. Blanton, “Multiple defect diagnosis using no assumptions on failing pattern characteristics,” in Proc. Des. Automat. Conf.,
2008, pp. 361–366.
[34] R. D. Blanton, K. N. Dwarakanath, and R. Desineni, “Defect modeling
using fault tuples,” IEEE Trans. Comput.-Aided Des. Integr. Circuits
Syst., vol. 25, no. 11, pp. 2450–2464, Nov. 2006.
[35] Y.-T. Lin, O. Poku, N. K. Bhatti, and R. D. Blanton, “Physically-aware
N-detect test pattern selection,” in Proc. Des., Automat. Test Eur., Mar.
2008, pp. 634–639.
[36] C. M. Bishop, Pattern Recognition and Machine Learning, 1 ed. New
York: Springer-Verlag, 2006.
[37] J. E. Nelson, W. C. Tam, and R. D. Blanton, “Automatic classification
of bridge defects,” in Proc. Int. Test Conf., 2010, p. 10.3.
[38] L. Rokach and O. Z. Maimon, Data Mining with Decision Trees: Theory
and Applications. Singapore: World Scientific, 2008.
[39] R. D. Blanton, J.T. Chen, R. Desineni, K. N. Dwarakanath, W. Maly,
and T. J. Vogels, “Fault tuples in diagnosis of deep-submicron circuits,”
in Proc. Int. Test Conf., 2002, pp. 233–241.
[40] K. Kundert and O. Zinke, The Designer’s Guide to Verilog-AMS, 1 ed.
New York: Springer, 2004.
[41] The Diva Reference Manual, Cadence Design Systems, Inc., San Jose,
CA, 2009.
[42] W3C. Extensible Markup Language (XML) 1.0 (Fifth Edition). (Nov.
2008) [Online]. Available: http://www.w3.org/TR/2008/REC-xml-20081126
[43] W. Maly and J. Deszczka, “Yield estimation model for VLSI artwork
evaluation,” Electron. Lett., vol. 19, no. 6, pp. 226–227, 1983.
[44] C. H. Stapper, F. M. Armstrong, and K. Saji, “Integrated circuit yield
statistics,” Proc. IEEE, vol. 71, no. 4, pp. 453–470, Apr. 1983.
[45] L. Wasserman, All of Statistics: A Concise Course in Statistical Inference, 1 ed. New York: Springer, 2004.
[46] D. Abercrombie. (Aug. 2009). DFM for Non-PhD's-Part 3: Real Life Examples [Online]. Available: http://blogs.mentor.com/david-abercrombie/blog/2009/08/20/dfm-for-non-phds-part-3-real-life-examples
[47] A. B. Kahng and Y. C. Pati, “Subwavelength lithography and its potential
impact on design and EDA,” in Proc. Des. Automat. Conf., 1999, pp.
799–804.
[48] The Calibre Pattern Matching User Manual, 1.0 ed. Mentor Graphics
Corporation, Wilsonville, OR, 2010.
[49] F. Brglez and H. Fujiwara, “A neutral netlist of 10 combinational
benchmark circuits and a target translator in fortran,” in Proc. Int. Symp.
Circuits Syst., 1985, pp. 1929–1934.
[50] F. Brglez, D. Bryan, and K. Kozminski, “Combinational profiles of
sequential benchmark circuits,” in Proc. Int. Symp. Circuits Syst., 1989,
pp. 1929–1934.
[51] The Cadence Encounter Reference Manual, Cadence Design Systems,
Inc., San Jose, CA, 2009.
[52] MOSIS. The MOSIS Service [Online]. Available: www.mosis.com
[53] H. K. Lee and D. S. Ha, “Atalanta: An efficient ATPG for combinational
circuits,” Dept. Electric. Eng., Virginia Polytechnic Inst. State Univ.,
Blacksburg, VA, Tech. Rep. 93-12, 1993.
[54] F. Corno, M. S. Reorda, and G. Squillero, “RT-level ITC’99 benchmarks
and first ATPG results,” IEEE Des. Test Comput., vol. 17, no. 3, pp. 44–
53, Jul.–Sep. 2000.
[55] The Tetramax User Guide, Synopsys, Inc., Mountain View, CA, 2009.
[56] The Cadence Encounter Test User Guide, Cadence Design Systems, Inc.,
San Jose, CA, 2009.
[57] R. Desineni, O. Poku, and R. D. Blanton, “A logic diagnosis methodology for improved localization and extraction of accurate defect behavior,” in Proc. Int. Test Conf., 2006, pp. 1–10.
[58] R. Desineni and R. D. Blanton, “Diagnosis of arbitrary defects using
neighborhood function extraction,” in Proc. VLSI Test Symp., 2005, pp.
366–373.
[59] A. K. Jain, M. N. Murty, and P. J. Flynn, “Data clustering: A review,”
ACM Comput. Surveys, vol. 31, no. 3, pp. 264–323, 1999.
[60] The Optissimo User Manual, 6.0 ed., PDF Solutions, Inc., San Jose,
CA, 2001.
Wing Chiu (Jason) Tam (S’03) received the B.S.
degree from the National University of Singapore,
Singapore, in 2003, and the M.S. degree from
Carnegie Mellon University, Pittsburgh, PA, in 2008,
where he is currently pursuing the Ph.D. degree in
electrical and computer engineering.
From 2003 to 2006, he was with Advanced
Micro Devices, Singapore, and Agilent Technologies, Singapore. His current research interests include computer-aided design for design-for-manufacturing, integrated circuit testing, defect
modeling, fault diagnosis, and yield modeling.
R. D. (Shawn) Blanton (S’93–M’95–SM’03–F’08)
received the B.S. degree in engineering from Calvin
College, Grand Rapids, MI, in 1987, the M.S. degree in electrical engineering from the University of
Arizona, Tucson, in 1989, and the Ph.D. degree in
computer science and engineering from the University of Michigan, Ann Arbor, in 1995.
He is currently a Professor of electrical and computer engineering at Carnegie Mellon University,
Pittsburgh, PA, where he is also the Director of
the Center for Silicon System Implementation, and
founder and leader of the Advanced Chip Testing Laboratory. His current
research interests include testing and diagnosis of integrated heterogeneous
systems.