LASIC: Layout Analysis for Systematic IC-Defect
Identification Using Clustering
Wing Chiu (Jason) Tam, Student Member, IEEE, and Ronald D. (Shawn) Blanton, Fellow, IEEE
Abstract—Systematic defects within integrated circuits (ICs)
are a significant source of failures in nanoscale technologies.
Identification of systematic defects is therefore very important
for yield improvement. This paper discusses a diagnosis-driven
systematic defect identification methodology that we call layout
analysis for systematic IC-defect identification using clustering (LASIC). By clustering images of the layout locations that
correspond to diagnosed sites for a statistically large number
of IC failures, LASIC uncovers the common layout features. To
reduce computation time, only the dominant coefficients of a discrete cosine transform analysis of the layout images are used
for clustering. LASIC is applied to an industrial chip and is found to be effective. In addition, detailed simulations demonstrate that LASIC is accurate.
Index Terms—Clustering, layout analysis, systematic defects,
test data mining, yield learning.
I. INTRODUCTION
DEFECTS caused by random contaminants were the main
yield detractor in integrated circuit (IC) manufacturing
before the nanoscale era [1]. However, as CMOS technology
continues to scale, the process complexity increases tremendously, which results in design-process interactions that are
difficult to predict and control. This, in turn, translates into
an increased likelihood of failure for certain layout features
that are sensitive to particular process corners. Unlike random
contamination, the defects that result from design-process
interactions are systematic in nature, i.e., they can lead to
an increased likelihood of failure in locations with similar layout geometries (note that this does not mean that they
must lead to a drastic yield loss). Since a product IC typically contains a diverse set of layout features, it is difficult
to use test structures to completely characterize the design-process interactions because of their limited volume [1], [2].
In other words, conventional test structures, while still very
useful in characterizing the process and understanding defects,
have somewhat diminished in applicability [1], [2].
To address these issues, volume diagnosis is increasingly deployed to improve/supplement yield-learning [3]–[14].
Manuscript received May 29, 2014; revised November 10, 2014; accepted
January 16, 2015. Date of publication February 24, 2015; date of current
version July 24, 2015. This work was supported by the Semiconductor
Research Corporation under Contract 1644.001. This paper was presented at
the International Test Conference [3] in 2010. This paper was recommended
by Associate Editor L.-C. Wang.
At the time this research was conducted, the authors were with the
Advanced Test Chip Laboratory, Electrical and Computer Engineering
Department, Carnegie Mellon University, Pittsburgh, PA 15213-3890 USA
(e-mail: blanton@ece.cmu.edu).
Color versions of one or more of the figures in this paper are available
online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TCAD.2015.2406854
Volume diagnosis refers to the process of performing software-based diagnoses of a large amount of IC test fail data,
which is further analyzed for a variety of purposes. For
example, on-going diagnoses are used to identify systematic
defects [3]–[7] and derive design-feature failure rates [8], [9].
In [10] and [11], the effectiveness of design for manufacturability (DFM) is evaluated using volume diagnosis results.
Diagnosis is also used as a part of a yield-monitoring
vehicle for estimating defect density and size distributions (DDSDs) [12]. In [13]–[15], diagnosis is used to monitor
and control IC quality. Finally, in [16], volume-diagnosis
is a key part of a methodology for evaluating test metrics and fault models without performing conventional test
experiments.
Yield learning based on volume diagnosis has several advantages. First, since no manufacturing process is perfect, defects
inevitably occur. Therefore, test must be applied to the manufactured ICs to screen out failures to ensure that bad parts do
not escape to customers. The fail data generated by test can
be directly used in software-based diagnosis. In other words,
an abundant amount of fail data is continuously being generated. Thus, no additional effort is required to generate the
fail data, and intrusion into the fabrication and test process
is minimized. Second, unlike test structures, software-based
diagnosis consumes only CPU cycles and therefore does not
consume extra silicon real estate. The cost to perform volume diagnosis is therefore comparatively lower. Third, volume
diagnosis is performed on the actual product ICs, which contain the diverse geometries that may render conventional test
structures inadequate. Thus, using volume diagnosis can complement existing yield-learning techniques to help improve
yield even further.
This paper is a step in this direction. Specifically, this
paper attempts to identify systematic defects for yield improvement using volume diagnosis. By clustering layout images
of diagnosis-implicated regions, layout-feature commonalities (if any) that underlie the failures are identified. Features
present in large clusters can then be analyzed further to
confirm the existence of a systematic issue, or alternatively
physical failure analysis (PFA) can be performed on one
or more of the actual failing chips that have the failing
feature. The methodology developed in this paper is called
layout analysis for systematic IC-defect identification using
clustering (LASIC).
A. Prior Work
Because of the potential yield benefits that can be derived
from systematic defect identification, a tremendous amount
of research effort has focused on this area. This is evident in several recently published papers on this topic.
Turakhia et al. [4] analyze design layout to identify locations
that marginally pass the DFM constraints. The resulting layout
locations are used as starting points for identifying systematic issues. References [8] and [17] compute expected
failure rates for layout features that are assumed to be difficult to manufacture. This information is combined with
volume diagnosis data to identify outliers (i.e., systematic defects). Jahangiri and Abercrombie [18] use critical-area analysis [19] to compute the expected failure rate for
each net. The presence of systematic issues is investigated
by comparing the observed failure rates with those predicted by critical-area analysis. Huisman et al. [5] apply
clustering techniques to test-response data collected from
a large number of failing ICs to identify common failure
signatures, which may indicate the presence of systematic
issues. References [20] and [21] perform extensive lithography simulation on the entire layout to identify hotspots.
Layout snippets¹ containing these hotspots are extracted
and clustered with the goal of formulating DFM rules.
References [22] and [23] extract layout features (such as
the number of vertices in a polygon, minimum line width, minimum spacing, etc.) of suspect defect locations identified by bright-field inspection. These locations are clustered based on the
features for characterizing each location. The resulting clusters are then used to eliminate suspect locations that are likely
not to result in a killer defect to better direct scanning electron
microscope (SEM) review.
B. Our Contribution
Undeniably, all of the aforementioned work has achieved a certain degree of success in systematic defect identification, as
evidenced by the reported experimental results. Nonetheless, many
of the past approaches [4], [5], [8], [17], [18] fall short of
automatically extracting similar layout features that may be
a significant source of yield loss. In addition, the approach that
uses critical area [19] requires a full-chip analysis, which is
time-consuming to achieve. The approach in [5] used clustering but at a high level of abstraction by analyzing scan-based
test responses. Clustering failed ICs into groups based on test
responses provides a very useful starting point for identifying systematics that are location-based but this alone may not
pinpoint the layout features that are potentially problematic.
LASIC attempts to address the aforementioned limitations by clustering portions of layout believed to be
the failing location. It should be clear, however, that
the work yet-to-be-described complements the existing
approaches [4], [5], [8], [17], [18]. For example, suppose the
critical-area/DFM-based approaches of [4] and [18] are used
to identify nets with unexpectedly high failure rates. Layout
images of these nets can be (automatically) extracted and
clustered to identify commonalities. In addition, the layout
features extracted using LASIC can be used as the inputs
for [8] and [17]. In fact, LASIC can be applied independently
¹A layout “snippet” refers to a single region within a single layer of a design’s layout that can be easily described by two x–y coordinates that indicate the upper-right and lower-left corners, respectively.
or used as a post-processing step for any systematic-defect
identification method whose output is a set of candidates that
are suspected to be the locations of systematic defects.
References [20] and [21] resemble LASIC but there are
several important differences. First, [20] and [21] rely on simulation, which can be inaccurate and too conservative, i.e.,
it is likely to identify features that may never cause an IC
to fail. Fixing most (if not all) of the hotspots might result
in over-correction which can adversely affect chip area and
performance. Relying on simulation alone also means that
systematic defects that have failing mechanisms that are not
modeled will be missed. In contrast, LASIC uses diagnosis-implicated layout locations that are generated by diagnosing
actual IC failures, and therefore does not suffer from these limitations. In addition, [20] and [21] require the entire design
to be simulated. This limits the scalability of the methodology, especially when many different process corners are
considered. LASIC, on the other hand, may apply process
simulation (which is not required) to very small portions of
the design for validation purposes only. Since only small
portions of the design are analyzed, a comprehensive and
accurate analysis (e.g., Monte Carlo analysis across process
corners) can be easily afforded. Most importantly, LASIC has
the capability to provide failure rates for problematic layout features. In contrast, the identification of layout hotspots
via process simulation (which likely does not account for
resolution enhancement techniques, dummy fill, etc.) does
not quantify the likelihood of a hotspot actually leading to
failure.
References [22] and [23] also closely resemble LASIC
but again there are several subtle but important differences.
First, LASIC analyzes layout images while the approach
in [22] and [23] extracts user-defined layout features. In other
words, LASIC makes no assumption on the importance of
certain layout features but instead automatically discovers the
important layout features. Second, analyzing layout snippets
as images has the added advantage that features from multiple layers can be easily represented in a single image. Third,
a vast number of image processing techniques are immediately available for analysis of layout. Finally, [22] and [23]
use bright-field inspection to find potential defect locations,
but these locations have not yet been shown to cause failure,
which results in another source of uncertainty. LASIC instead
uses diagnosis-implicated regions that are generated by diagnosing actual ICs that are known to have failed. However,
because diagnosis is not perfect, LASIC also has to deal with some level of uncertainty.
The rest of this paper is organized as follows. Section II
describes the details of LASIC. This is followed by a silicon
experiment involving an industrial design and a simulation
experiment involving eight benchmarks [24], [25] presented
in Section III. Finally, the conclusions are presented in
Section IV.
II. SYSTEMATIC DEFECT IDENTIFICATION
In this section, the details of LASIC are described. We begin
by giving an overview, which is then followed by detailed
descriptions of each step in LASIC.
A. Overview
LASIC consists of four steps: 1) volume diagnosis;
2) layout snippet extraction; 3) snippet clustering; and
4) validation. Volume diagnosis is simply applying diagnosis
to a sufficiently-large number of IC failures. The outcome of
diagnosis is a set of candidates, where each candidate consists
of the suspect net or cell where the defect is believed to reside.
A layout region (i.e., a region involving one or more layers in
the layout that contains the features of interest, for example,
a net and its surrounding nets) is extracted for each candidate.
LASIC then partitions the extracted layout region into one or
more layout snippets, each of which is saved as an image.
Clustering, such as K-means [26], is then applied to group
similar snippet images together to identify any commonalities.
Finally, the identified layout feature is simulated using a process simulator (such as the lithography tool Optissimo [27])
to confirm the existence of a systematic defect. As mentioned
earlier, validation could take on other forms such as PFA.
B. Diagnosis
Our goal is to identify yield-limiting layout features. One
possible approach is to grid the entire layout into many small
regions (i.e., layout snippets) and then perform clustering on
the resulting snippet images. The result of clustering can then
be overlaid with volume-diagnosis data to identify any systematic issues. This approach is possible but inefficient since many
layout features will be easily manufacturable. In other words,
for the aforementioned approach, computational resources are
spent on clustering a large number of “healthy” snippets,
which is not the focus here.² Therefore, to narrow down the
scope of investigation and to save computing resources, clustering is limited to the layout regions implicated by diagnosis
of failed ICs since these regions are likely to contain any
systematic defects that may exist.
A variety of diagnosis approaches have been proposed
(see [28]–[33]) and the question considered here is which one
is the most suitable for LASIC. A diagnosis approach that
can exactly pinpoint the x-y-z location of the defect would be
ideal. However, this requirement is unrealistic because diagnosis is not perfect and a diagnosis outcome typically consists
of multiple candidate locations. Diagnostic resolution [34] is
a measure of the effectiveness of a diagnosis methodology
to logically localize the fault. A higher diagnostic resolution
is desired since there is less uncertainty concerning the failure location. Diagnostic resolution is often used as a metric
for comparing different techniques and evaluating the merit of
a particular diagnostic approach.
The diagnosis approach adopted here is
DIAGNOSIX [29], which uses a layout-based approach
that can distinguish logically-equivalent faults, leading to an
improved diagnostic resolution. High diagnostic resolution
equates to fewer snippets used for the clustering step,
resulting in a higher confidence. It should be emphasized,
however, that other diagnosis techniques can and have been
used instead of DIAGNOSIX.
²Identification of easy-to-manufacture features, however, may be useful for forming a set of allowable patterns for DFM objectives.
Fig. 1. Flow diagram of the Carnegie Mellon layout analysis tool (LAT).
C. Layout Snippet Creation
An in-house layout analysis tool (LAT) is used to extract
layout snippets of diagnosis-implicated regions [35]. LAT
makes use of three open-source C++ libraries, namely, the
OpenAccess library (OA) [36], the computational geometry
algorithm library (CGAL) [37], and the Cairo library [38]. The
OA library performs design-format conversion and provides
high-performance database operations. The CGAL library contains geometric algorithms suitable for layout analysis. The
Cairo library is used for image rendering and generation.
Fig. 1 illustrates the flow implemented by LAT. Each candidate reported by diagnosis is retrieved from an annotated
layout database to find its layout geometries, which are further
processed to create snippet images.
1) Database Preparation: Since the typical description
of a design layout is graphic data system (GDS) [39], the
GDS-to-OA translator provided in the OA package is used to
convert GDS to the format of the OA database. In addition,
since GDS typically consists of purely geometric information,
the resulting OA database will not contain any logical connectivity information or any net or cell names. Thus, the
converted database is further processed to add this information. This is a one-time task whose cost is amortized over
repeated use of the annotated OA database to extract snippet
images for each chip failure of interest.
2) Candidate Geometry Processing: The input to LAT is
a set of candidates that have been identified by diagnosing
a failed IC. Since the database has been annotated with connectivity information, the polygons associated with a given
candidate can be easily accessed. Clearly, different candidates
consist of different polygons that reside in various layers,
thus making their comparison nontrivial. Fig. 2 illustrates this
diversity. Fig. 2(a) shows a relatively long net that
spans two layers (highlighted with a dashed boundary), while
Fig. 2(b) shows a short net that resides in only one layer. If
only a single snippet image is extracted for each net, then
they would either have a very different scale (with a fixed
image dimension) or have a very different image dimension
(with a fixed scale). In either case, it is difficult to directly
compare these two nets. Although nets are used as examples
here, the argument also applies to other types of diagnosis
candidates, such as a cell defect.
Fig. 2. Diagnosis-implicated layout region for (a) a long net that spans two layers and (b) a short net that spans only one layer. Both nets of interest are identified with a dashed boundary.
Fig. 3. Snippets of a net polygon with (a) vertical alignment and (b) horizontal alignment.
Fig. 4. Illustration of splitting of (a) layout region into (b) layout snippets.
To overcome this challenge, the layout region implicated
by a candidate is split into smaller regions of fixed scale and
dimension that are called layout snippets. This standardizes the
scale and the dimension of the layout regions for easy comparison. Unfortunately, snippet alignment remains an issue, as
illustrated in Fig. 3. Alignment here refers to the proper choice
of the snippet-center location to facilitate snippet comparison.
Fig. 3(a) shows three different snippets generated with different vertical alignment while Fig. 3(b) shows three different
snippets generated with different horizontal alignment. Clearly,
all the snippets appear dissimilar, although the same polygon
of interest (highlighted with a dashed boundary) is partially
captured by all. It is also clear that there are two degrees of
freedom while aligning the snippets since a one-layer snippet
is a 2-D object.
To address the snippet alignment issue, a snippet center is
required to be on the center line of each polygon that belongs
to the candidate. (The extraction of the polygon center line
will be described in detail in the next section.) For example,
if the center line of a candidate polygon is a horizontal line,
as shown in Fig. 3(a), then the snippet center is required to
be vertically aligned with this center line. The rationale for
aligning in this manner stems from the fact that a systematic
defect is likely caused by layout features that are difficult to
manufacture. These features are defined by the polygons that
make up the candidate as well as its neighboring polygons.
Since we do not have prior knowledge of which polygons in
the neighborhood play a role in the systematic defect, it is best
to center the candidate polygons in the snippet and to ensure
that the radius of influence is carefully chosen.
Requiring the snippet center to align with the center line
of a candidate polygon, however, only restricts one degree of
freedom. The snippet center can be in an arbitrary position
along the center line and therefore may still cause a misalignment issue. This is illustrated in Fig. 3(b). Specifically,
despite the vertical alignment of the snippet center with the
horizontal center line, horizontal misalignment of the snippet
still occurs. Thus, the snippet center is further restricted to
specific points on the center line of the candidate polygon
as shown in Fig. 4. Again, the candidate polygon is highlighted with a dashed boundary. The lines inside the polygon
are the center lines. There are three types of locations that can
serve as a snippet center: 1) endpoints (represented as dots) of
the center line; 2) projection points (represented as triangles)
from the center-line endpoints of the neighboring polygons
onto the center line of the candidate polygon; and 3) midpoints (represented as crosses) between the first two types of
points. (If there is no projection point, a midpoint will still be
calculated between the endpoints of the candidate center line.)
The rationale behind restricting the snippet center in this way
stems from the fact that these locations define a layout feature
or a layout-feature interaction. Clearly, the endpoints define
the center line, which captures the topology of the routing
polygon. Projection points from neighboring polygons define
the interactions between the neighbors and the candidate. The
midpoints are of interest because they aid the capture of the
layout pattern between the first two types of points. Here is
another interpretation of the rationale: if we imagine moving
the snippet center along the center line of a candidate polygon, starting at one of its endpoints, then the projection points
define changes in neighborhood (i.e., the entrance or exit of
a neighboring polygon) and the midpoints define the layout
patterns before and after the neighborhood change.
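To make the center-selection rule concrete, the following minimal Python sketch (illustrative only, not the LAT implementation; all names are hypothetical) computes the three types of snippet centers for one center-line segment: the segment endpoints, the projections of neighboring center-line endpoints onto the segment, and the midpoints between consecutive points.

import numpy as np

def snippet_centers(seg_start, seg_end, neighbor_endpoints):
    # Candidate snippet centers along one center-line segment:
    # 1) segment endpoints, 2) projections of neighbor center-line
    # endpoints onto the segment, and 3) midpoints in between.
    p0, p1 = np.asarray(seg_start, float), np.asarray(seg_end, float)
    d = p1 - p0
    pts = [p0, p1]
    for q in np.atleast_2d(np.asarray(neighbor_endpoints, float)):
        t = np.clip((q - p0) @ d / (d @ d), 0.0, 1.0)  # clamp onto segment
        pts.append(p0 + t * d)
    pts = sorted(pts, key=lambda p: (p - p0) @ d)       # order along segment
    mids = [(a + b) / 2 for a, b in zip(pts, pts[1:])]  # midpoints
    return np.unique(np.round(np.vstack(pts + mids), 6), axis=0)

centers = snippet_centers((0, 0), (10, 0), [(3, 2), (7, -1)])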
3) Center Line Extraction: The image scale, dimension,
and alignment requirements have motivated the use of
the straight-skeleton algorithm [40] provided by the CGAL
library. A straight skeleton of a polygon is defined as “the
union of the pieces of angular bisectors traced out by the
polygon vertices” when the polygon is being shrunk [40].
Fig. 5(a) illustrates the straight skeleton (shown in dotted lines
and built using the CGAL library) of a polygon. A vertex
that belongs to both the polygon and the straight skeleton is
called a contour vertex, while a vertex that belongs only to
the straight skeleton is called a skeleton vertex. For LASIC,
however, only the skeleton vertices are of interest since they
alone define the center line of the polygon as illustrated in
Fig. 5(b). The skeleton vertices therefore form the endpoints
of the center line.
Fig. 5. Illustration of (a) a straight skeleton of a polygon and (b) the center line of the same polygon.
Use of the straight-skeleton algorithm allows the center line
of a polygon to be extracted. Center lines are extracted for both
the candidate polygon and its neighboring polygons, as shown
in Fig. 4. The center-line endpoints for the candidate polygon
are immediately available after the center-line extraction (they
are in fact the skeleton vertices). The projection points can
be easily calculated once the center lines of the neighbors are
determined. The midpoints can then be dynamically calculated
after all the projection points are determined. If the extracted
center line of the candidate polygon has multiple line segments
as shown in Fig. 5, then each line segment will be processed
separately to extract the projection points and the midpoints.
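The straight-skeleton computation itself is performed in C++ with CGAL; as a rough, purely illustrative analogue, the center line of a rasterized polygon can be approximated with the medial axis from scikit-image (a related but distinct construction; this is a sketch, not LAT's code).

import numpy as np
from skimage.morphology import medial_axis

# Rasterize a simple L-shaped routing polygon as a binary mask.
mask = np.zeros((60, 60), dtype=bool)
mask[10:50, 10:20] = True   # vertical arm
mask[40:50, 10:50] = True   # horizontal arm

# The medial axis approximates the polygon center line; CGAL's
# straight skeleton is the exact, polygon-based counterpart.
skeleton = medial_axis(mask)
centerline_pixels = np.argwhere(skeleton)  # (row, col) points on the line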
4) Snippet Image Generation: The layout snippets are
saved as images using the Cairo library, which is an open-source graphics library that is widely used for image rendering.
The Cairo library is platform-independent and is capable of
producing an image in a variety of formats. C++ code using
the OA library has also been written to support the GDS format. For clustering, the portable network graphics (PNG) image
format is used. Once the yield-limiting layout features are
identified, the corresponding layout snippet can be saved in
the GDS format for further analysis (e.g., lithography simulation). It should be noted that the size of the layout snippet can
be easily adjusted to satisfy the requirements of the subsequent
analysis, if needed.
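As an illustration of the rendering step, the sketch below uses Cairo's Python binding (pycairo) rather than the C++ interface used by LAT; the rectangles, colors, and the 2-by-2 µm window are made-up values.

import cairo

W_PX, WINDOW_UM = 100, 2.0  # 100-by-100 pixels for a 2-by-2 um window
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, W_PX, W_PX)
ctx = cairo.Context(surface)
ctx.scale(W_PX / WINDOW_UM, W_PX / WINDOW_UM)  # layout units -> pixels

# Hypothetical rectangles (x, y, width, height in um) for two layers;
# the alpha channel lets geometries from different layers coexist in
# one image (cf. Section II-D).
metal1 = [(0.2, 0.8, 1.6, 0.3), (0.2, 0.2, 0.3, 0.9)]
via1 = [(0.9, 0.85, 0.2, 0.2)]
for rects, rgba in ((metal1, (0, 0, 1, 0.6)), (via1, (1, 0, 0, 0.8))):
    ctx.set_source_rgba(*rgba)
    for x, y, w, h in rects:
        ctx.rectangle(x, y, w, h)
    ctx.fill()
surface.write_to_png("snippet.png")  # PNG format used for clustering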
After the images are generated, the discrete cosine transform (DCT) [41] is performed on each image. Let A be
a matrix of pixel values that represent a snippet image, B be
the coefficient matrix after the DCT, and D be the DCT matrix.
Then, the image A can be written as $A = D^{T}BD$, where $D^{T}$ denotes the transpose of $D$. The advantage of the DCT representation lies in the fact that the dominant DCT coefficients are concentrated around the upper-left
corner of B. In other words, the image matrix A can be approximated using only the dominant coefficients. This approximation is a widely used technique in image processing; in
fact, it is the method used in the original JPEG standard [42].
For LASIC, this approximation provides tremendous speed-up
because only the dominant coefficients are kept for subsequent analysis. For an image A that has 100-by-100 pixels,
only ∼150 DCT coefficients are needed to ensure accuracy for the analysis. Compared with the original data size
of a 10,000-pixel image (10 kilobytes), the approximation
(150 bytes) represents a reduction of 98.5%. It should be emphasized
that the DCT is not the only suitable transform; other transforms,
such as the wavelet transform [43], can also be easily used.
In fact, more general dimensionality reduction techniques
(e.g., principal component analysis [44]) can be used as well.
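The truncation step can be sketched with SciPy as follows; keeping the upper-left 13-by-13 coefficient block (169 values) is a simplification chosen here only because it is close to the ∼150 coefficients cited above (an assumption, not the paper's exact selection rule).

import numpy as np
from scipy.fft import dctn, idctn

def dominant_dct(img, k=13):
    # Keep only the upper-left k-by-k block of 2-D DCT coefficients,
    # where the dominant coefficients are concentrated.
    return dctn(img, norm="ortho")[:k, :k].ravel()

img = np.random.rand(100, 100)  # stand-in for a 100-by-100 snippet image
coeffs = dominant_dct(img)      # 169 values instead of 10,000 pixels

# Reconstruction check: zero all but the dominant block.
B = np.zeros(img.shape)
B[:13, :13] = coeffs.reshape(13, 13)
approx = idctn(B, norm="ortho")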
D. Clustering
Clustering, a form of unsupervised learning [44], represents
a class of exploratory data analysis techniques that partition objects
into groups based on their similarity. Similar objects are ideally
put in the same cluster while dissimilar objects are placed in
different clusters. The goal of clustering is to discover structure
in the data by exploring similarity within the data. Employing
clustering involves the following [45].
1) Representation of the object, which can involve feature
extraction or selection.
2) A measure of similarity between objects.
3) Selection and application of a clustering algorithm.
Object representation is accomplished using the process
described in the previous section. More specifically, the objects
to be clustered are center-aligned snippet images of fixed
dimension and scale, each of which is represented as a subset
of its DCT coefficients. There are a number of advantages
of using this type of representation. First, it allows direct
comparison of snippet images (e.g., two images can be
compared using their selected DCT coefficients). Second, it
is easy to include geometries from different layers in the
same image by appropriately setting color transparency values (also called the alpha channel) of the layers involved.
This is important when the goal is to identify a systematic defect caused by layout features that span multiple
layers (e.g., a bridge induced by chemical-mechanical polishing (CMP) [46]). Third, other image processing techniques
(such as the distance transform [47]) can be readily applied if
desired.
Before a clustering technique can be applied, a similarity
definition is required. Suppose the DCT coefficients of the
two images are represented as two vectors $X$ and $Y$, and let
$x_i$ ($y_i$) denote the $i$th component of $X$ ($Y$). A common metric
for comparing/clustering the two vectors is the total squared
Euclidean distance over the vector components
$$\sum_i (x_i - y_i)^2.$$
Two images are similar if they have a very small (total)
Euclidean distance in terms of their selected DCT coefficients.
Conversely, two images are dissimilar if they have a large
Euclidean distance. It should be noted that Euclidean distance
is chosen because of its popularity. Other distance functions,
such as the cosine distance [48], can also be easily used and
have been explored as well; they generally give
similar results.
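In code, the metric is a one-liner over the selected coefficient vectors, and the cosine alternative is available in SciPy (a sketch under the representation above, not the authors' implementation):

import numpy as np
from scipy.spatial.distance import cosine  # alternative metric noted above

def snippet_distance(cx, cy):
    # Total squared Euclidean distance between two snippets'
    # selected DCT coefficient vectors.
    cx, cy = np.asarray(cx, float), np.asarray(cy, float)
    return float(np.sum((cx - cy) ** 2))

a, b = np.random.rand(150), np.random.rand(150)
d_euclidean, d_cosine = snippet_distance(a, b), cosine(a, b)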
Having defined the object representation and the similarity metric, the third step is to choose a suitable clustering
algorithm. Data clustering is an extensively researched field
and there are many clustering algorithms available [45]. The
K-means algorithm [26] is one of the most popular approaches
because of its ease of implementation and high speed. Other
approaches include mixture resolving [44] and various graph-theoretic approaches [49], [50]. Due to its simplicity, the
K-means algorithm is employed which we describe next.
Let $x_1, x_2, \ldots, x_N$ denote the $N$ data points to be clustered,
$K$ be the desired number of clusters, $\mu_1, \mu_2, \ldots, \mu_K$ be the
cluster centers, and $r_{ij}$ denote the potential assignment of the
$i$th data point to the $j$th cluster, where
$$r_{ij} = \begin{cases} 1 & \text{if the $i$th data point is assigned to the $j$th cluster} \\ 0 & \text{otherwise.} \end{cases}$$
The objective of K-means is to minimize the following cost
function [44]:
$$C = \sum_{i=1}^{N} \sum_{j=1}^{K} r_{ij} \times \mathrm{dist}(x_i, \mu_j)$$
where $\mathrm{dist}(x_i, \mu_j)$ is the distance function that measures the similarity of data point $x_i$ and the cluster center $\mu_j$; the total
Euclidean distance defined earlier is used. From the expression of the cost function $C$, it is clear that the goal is to choose
cluster centers (i.e., $\mu_j$, $1 \leq j \leq K$) and data-point assignments
(i.e., $r_{ij}$) such that the sum of distances from the data points to
their assigned cluster centers is minimized. This problem is
NP-hard [51], so an approximate heuristic is typically used to
find a solution. From the cost-function expression, it is also
clear that the choice of $K$ determines the number of clusters.
It is nontrivial to choose an optimal $K$. In general, $K$ is chosen
empirically or by heuristics that penalize data fragmentation.
For LASIC, $K$ is chosen by using a simple heuristic:
$$\min K \quad \text{subject to} \quad r_{ij}\,\mathrm{dist}(x_i, \mu_j) < T \quad \forall\, i, j.$$
In other words, we choose the smallest K such that all the
data-point-to-cluster-center distances are smaller than a predefined threshold T (for some T > 0). Intuitively, as long as each
data point is reasonably close to its cluster center, increasing
K is halted to avoid fragmentation of the data. To improve
efficiency, binary search is used to search for K. A hard limit
is also used to restrict K to a reasonably small value. Finding
a better and more efficient heuristic for identifying the optimal value of K is an open problem in the machine-learning
community.
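A minimal sketch of this heuristic, assuming scikit-learn's K-means (the paper's implementation is in MATLAB) and treating T as a bound on the squared distance to the assigned center; the binary search presumes, as in the text, that feasibility behaves monotonically in K.

import numpy as np
from sklearn.cluster import KMeans

def choose_k(X, T, k_max=64, seed=0):
    # Smallest K (found by binary search, capped by the hard limit
    # k_max) such that every point is within threshold T of its center.
    def fits(k):
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
        d = km.transform(X).min(axis=1)   # distance to assigned center
        return bool(np.all(d ** 2 < T)), km
    lo, hi, best = 1, min(k_max, len(X)), None
    while lo <= hi:
        mid = (lo + hi) // 2
        ok, km = fits(mid)
        if ok:
            best, hi = km, mid - 1        # feasible: try fewer clusters
        else:
            lo = mid + 1                  # infeasible: need more clusters
    return best                           # fitted KMeans model, or None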
There are several options for employing K-means for the
problem addressed in this paper. The simplest approach is to
create snippet images for each candidate of interest and provide the resulting images directly to the K-means algorithm.
Unfortunately, this naïve approach unintentionally causes the
layout features from a single candidate that occupy a large
layout region to adversely bias the cluster centers, regardless
of the presence of systematic defects. Fig. 6 illustrates this
situation. Specifically, Fig. 6(a) shows seven snippet images
extracted from a long net while Fig. 6(b) shows three snippet
images extracted from a short net. It is clear that there exists
similarity for the images within the following groups: a1–a7
and b1–b3. In other words, snippet images derived from the
same net tend to be strongly correlated (if not identical) and
have a small distance between each other. This is especially
the case for a long net. If these images are provided as input to
the K-means algorithm, then images a1–a7 may form a cluster
center, regardless of whether a systematic defect is present in
a1–a7. The same problem may occur for images b1–b3 as well.
Fig. 6. Illustration of (a) correlated snippet images resulting from splitting a long net and (b) the snippet image from a short net.
A novel approach described here is to: 1) perform K-means
on the snippet images for each candidate first; 2) select representative snippet images from each candidate using the
resulting clusters based on the cluster centers; and 3) perform K-means again on the representative snippet images
from all the candidates. This approach identifies unique
representative layout features from each candidate so that
repeating/similar features from the same candidate will not
bias the overall clustering process. In addition, this approach
has the advantage that the number of images to be clustered
(in the second pass) is substantially reduced, thereby achieving
a faster runtime. Using again the nets of Fig. 6, only snippet
images {a1–a4, b1–b2} will be used as input to the second pass
of the K-means algorithm. The two-pass K-means algorithm
is implemented in MATLAB [52].
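The two-pass scheme can be sketched as follows (again assuming scikit-learn rather than the MATLAB implementation; the fixed per-pass cluster counts k1 and k2 stand in for the threshold-based choice of K described above):

import numpy as np
from sklearn.cluster import KMeans

def two_pass_kmeans(per_candidate, k1=4, k2=20, seed=0):
    # Pass 1: cluster each candidate's snippets and keep only the
    # snippet nearest each cluster center, so a long net's many
    # correlated snippets cannot bias the final cluster centers.
    reps = []
    for X in per_candidate:               # X: one candidate's snippets
        k = min(k1, len(X))
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
        for j in range(k):
            members = np.where(km.labels_ == j)[0]
            d = np.linalg.norm(X[members] - km.cluster_centers_[j], axis=1)
            reps.append(X[members[np.argmin(d)]])
    reps = np.vstack(reps)
    # Pass 2: cluster the representatives from all candidates.
    final = KMeans(n_clusters=min(k2, len(reps)), n_init=10,
                   random_state=seed).fit(reps)
    return reps, final.labels_

rng = np.random.default_rng(0)
cands = [rng.random((12, 150)), rng.random((3, 150))]  # two candidates
reps, labels = two_pass_kmeans(cands)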
Unfortunately, the distance metric adopted and many other
commonly used distance metrics (e.g., cosine distance) are not
rotation- or reflection-invariant with respect to the image orientation. Fig. 7 illustrates this problem: it may not be immediately
apparent that the eight snippets shown are exactly the same, since
they are shown with different orientations. The ideal distance
metric should give a distance of zero between any two of these
images, but this is not the case because the Euclidean metric is
not immune to orientation. One possible solution is to define
the distance metric to be the minimum distance between two
images A and B under all orientations of B (note that, we do
not need to consider all orientations of A). This is the approach
adopted in [20] and [21] for performing hotspot analysis. This
method is accurate, but would tremendously increase runtime
(by up to a factor of 8×) since the distance-metric evaluation is
a core step in all clustering algorithms [45]. A faster approach
is to generate the clusters first using the current distance metric, and then merge the resulting clusters afterwards using the
images at the cluster centers only, taking into account all possible orientations of the cluster centers. This is advantageous
since the cluster centers are only used for the more expensive
distance metric calculation. The number of cluster centers is
typically far less than the number of images. This is a heuristic, however, which means it is possible that two or more of
the images in the merged cluster have a distance that is larger
than the threshold used in the second K-means pass since only
the cluster centers are considered during the merging process.
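The eight orientations of Fig. 7 are the symmetries of a square pixel array and are easy to enumerate; below is a sketch of the orientation-aware metric used during merging.

import numpy as np

def orientations(img):
    # The eight orientations of a square image: four rotations of the
    # image and four rotations of its mirror image (cf. Fig. 7).
    return [np.rot90(m, r) for m in (img, np.fliplr(img)) for r in range(4)]

def min_orient_distance(a, b):
    # Minimum squared Euclidean distance between image a and all eight
    # orientations of image b (only one image needs reorienting).
    return min(float(np.sum((a - o) ** 2)) for o in orientations(b))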
To merge the clusters, the clustering outcome is represented
using a graph, where each vertex represents the snippet image
closest to the cluster center and an edge exists between two
vertices (i.e., images), if and only if the minimum distance
between the two images, including their eight rotated and
Fig. 7. Illustration of eight different orientations of a single layout pattern.
Fig. 8. Overall flow of the snippet image clustering process.
reflected representations, is less than a certain threshold. (This
threshold should be chosen to have a value similar to the
threshold used in the second K-means pass.) Clearly, two
clusters should be merged if an edge exists between their corresponding vertices in the graph. Thus, the cluster-merging
problem is equivalent to identifying the connected components
of the graph, i.e., clusters in the same connected component should be merged. This problem can be easily solved
by using a depth-first search in linear time [53]. It is possible
that merged clusters have elements (i.e., snippets) that have
a distance greater than the threshold but we have not found
this to be an issue in this paper. Fig. 8 illustrates the overall
clustering flow, which typically takes only a few minutes
of compute time.
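A sketch of the merge step using SciPy's connected components (equivalent to the linear-time depth-first search mentioned above); dist would be an orientation-aware metric such as min_orient_distance from the previous sketch, and it is evaluated only between cluster-center images.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def merge_clusters(center_images, threshold, dist):
    # Build the merge graph: one vertex per cluster-center image, an
    # edge when two centers are within threshold under some orientation.
    n = len(center_images)
    rows, cols = [], []
    for i in range(n):
        for j in range(i + 1, n):
            if dist(center_images[i], center_images[j]) < threshold:
                rows += [i, j]
                cols += [j, i]
    graph = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    # Clusters in the same connected component are merged.
    _, label = connected_components(graph, directed=False)
    return label  # label[c] = merged-cluster id of original cluster c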
E. Simulation Validation
After clustering, each snippet image is assigned a label to
indicate its cluster. For example, suppose 100 snippet images
are grouped into six clusters, implying that each snippet image
will be labeled with an integer from 1 to 6. The clusters
formed are sorted according to their sizes (i.e., the number
of data points assigned to a cluster) in decreasing order. The
resulting order is called the rank of the cluster. Specifically,
the largest cluster has rank one, the second largest cluster has
rank two, and so on. The dominant clusters are of particular
interest because they represent substantial commonalities in
layout features among the observed failures and are likely to
contain systematic defects (if any). Snippet images in the same
cluster exhibit some degree of similarity and can be manually
inspected. However, a more effective approach is to choose
a few representatives from each cluster for further analysis.
This is achieved by sampling several snippet images closest
to the cluster centers. In this paper, lithography simulation of
the layout snippets is performed to identify any lithography-induced hotspots. Hotspot investigation is performed since
sub-wavelength lithography is ubiquitously deployed in nanoscale technologies and is a major source of systematic issues.
The software tool Optissimo [27] is used to perform the
simulation.
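Ranking clusters by size and sampling a few representatives near each center can be sketched as follows (hypothetical names: X holds the snippets' coefficient vectors, and labels and centers come from the final K-means pass).

import numpy as np

def rank_and_sample(X, labels, centers, n_rep=3):
    # Rank clusters by size (rank 1 = largest) and pick the n_rep
    # snippets closest to each cluster center for validation.
    sizes = [np.sum(labels == j) for j in range(len(centers))]
    picks = {}
    for rank, j in enumerate(np.argsort(sizes)[::-1], start=1):
        members = np.where(labels == j)[0]
        d = np.linalg.norm(X[members] - centers[j], axis=1)
        picks[rank] = members[np.argsort(d)[:n_rep]]
    return picks  # rank -> indices of representative snippets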
Analysis does not have to be limited to lithography, however. For example, CMP simulation (using CMP Analyzer [54],
for instance) can be used to identify any yield-limiting hotspots
induced by CMP. In fact, simulators for any yield-loss mechanism can be employed to investigate the existence of any
systematic issues that may explain a large cluster of failures.
Of course, each source of failure may require a different radius
of influence for constructing the snippet images.
As stated earlier, although a simulation-based approach is
invoked in this paper for validating the existence of a systematic defect, other more comprehensive and conclusive methods
can also be used including PFA.
Once the systematic defects are identified and understood,
it is possible to tune the manufacturing process to minimize
their occurrences to improve yield. In addition, it is possible
to describe yield-limiting layout features directly using design
rule check tools such as Calibre Pattern Matching [55]. These
rules can then be applied to the current design in production to estimate the failure rates of these features, by noting
their frequency of occurrence in diagnosis-implicated regions,
to understand their yield impact. However, the details of this
analysis are beyond the scope of this paper. If the systematic defects identified have a significant yield impact, then
the corresponding DFM rules can be provided to the designers so that the systematic defects can be mitigated in future
designs [10], [56].
III. EXPERIMENTAL RESULTS
In this section, experimental results involving both test
data from actual chip failures and simulation data are
presented.
A. Silicon-Data Experiment
An experiment was carried out using an industrial test chip
from LSI Corporation to study the effectiveness of LASIC.
The test chip, fabricated using 130-nm technology, consists
of 384 64-bit arithmetic logic units (ALUs). Each ALU has
Fig. 9. Illustration of clustered snippet images from (a) one cluster and (b) a second cluster.
approximately 3,000 gates and is tested using approximately³
230 tests that achieve ∼100% stuck-at fault coverage. In this
paper, 6,980 failing ICs are used. Removal of non-ALU failures, multiple-ALU failures, and failures with massive numbers of failing
bits leads to 738 unique failing ICs that are used for further
analysis in LASIC. These failing ICs are successfully diagnosed using DIAGNOSIX [29], resulting in 2,168 diagnosis
candidates. LASIC is applied to all 2,168 diagnosis candidates.
In addition, in order to validate LASIC, two additional ICs
with preexisting PFA results are also included in this experiment. Each of these ICs has two diagnosis candidates, all of which
are included in the experiment.
LAT (described in Section II-C) is applied to the 2,172 diagnosis candidates using a 2-by-2 µm area for defining the
snippet size. Both the width and height of the snippet size
are more than 15× larger than the minimum feature size
(0.13 µm) and therefore should be sufficient for capturing
all optical interactions [57]. To ensure accuracy, each snippet
image is chosen to have a resolution of 100-by-100 pixels.
Figs. 8 and 9 show LASIC-generated snippet images with this
resolution. It is clear that the layout features are sharply represented with this resolution. A total of 291,249 snippet images
are generated for the 2,172 diagnosis candidates. Column 5
of Table I shows the number of extracted snippet images for
each layer (column 1).
The two-pass K-means clustering approach described earlier
is applied to the snippet images for each layer separately since,
in this paper, we are investigating lithography issues, which are
a function of one layer only. T1 (column 2) and T2 (column 3)
are the thresholds for choosing K for the first-pass and second-pass K-means, respectively. T1 and T2 indirectly specify the
desired cluster size and are empirically chosen. T3 (column 4)
is the threshold used when merging the clusters and is chosen
to have the same value as T2. Column 6 of Table I shows
the number of snippet images selected by the first-pass of
K-means. After the execution of the second-pass, the clusters
are merged taking into account the different orientations of
the cluster centers. The number of clusters before and after
the merges are given in columns 7 and 8, respectively, of
Table I. In addition, cluster size (i.e., the number of data points
assigned to a cluster) is measured for all clusters and their
minimum, maximum, and average values are recorded in
columns 9–11, respectively. Cluster size is a measure of the
importance of a cluster (e.g., a cluster with 20 data points is
not as important as a cluster with 200 data points).
³The number of tests varies slightly for each ALU.
Table I reveals that the clustering outcome for the via layers consists of a small number of clusters. This is expected
because most of the images contain a single via in the center
of the image. Occasionally, neighboring vias are included in
the image, and sometimes the difference is large enough to
result in an extra partition. On the other hand, snippet images
in the contact layer are partitioned into many more clusters.
This too is expected since contacts are more densely placed
than vias in general and therefore the same 2-by-2 µm bounding box captures a more diverse set of images in the contact
layer. A similar trend is observed for the polysilicon and metal
layers. In other words, a more densely routed layer results in
more clusters since denser routes result in a more diverse set
of snippets. The irregular geometries in the metal-1 layer also
contribute to more clusters. As a result, it is worth exploring
different bounding-box sizes for each layer.
It is also informative to inspect the images in the same
cluster as well as the images that are in different clusters.
The clusters for metal-1 are chosen for illustration purposes since metal-1 contains a diverse set of geometries.
Fig. 9(a) shows four snippet images from a cluster in metal-1,
while Fig. 9(b) shows four snippet images from another
metal-1 cluster. Fig. 9 shows that geometries in the same cluster resemble each other but are not exactly the same while
geometries in different clusters exhibit substantial differences.
This example clearly demonstrates that LASIC is able to
group images with similar layout features together for further
analysis.
The layout snippets that correspond to the snippet images
are fed to Optissimo [27] for lithography simulation. The
size of the snippets has been increased to ensure accurate
simulation.⁴ Since the LSI test chip for this experiment is
implemented in 130-nm technology, the lithography parameters are chosen to coincide with that technology, as shown in
Table II. (A more detailed explanation of the parameter values
chosen is given in [58].) For this part of the experiment, focus
is again placed on the metal-1 layer only.
Fig. 10 shows an example of a lithography-induced hotspot
(i.e., insufficient contact coverage) identified by the Optissimo
simulation on one of the layout snippets. The simulation result
for the contact layer is also shown so that the hotspot can be
revealed. There are altogether 20 hotspots identified in the
metal-1 layer. These hotspots may cause systematic defects.
More conclusive analysis such as PFA can be used to inspect
the corresponding areas in the corresponding failing ICs to
determine if these hotspots are yield-limiting. If they are
found to be causing significant yield loss, the process can
be adjusted to improve yield. In addition, DFM rules can be
formulated to prevent occurrence of these hotspots in future
designs. For example, the insufficient coverage of the contact
in Fig. 10 can be resolved by increasing the metal-1-to-contact
enclosure requirement.
⁴The snippet size for an accurate simulation may not be the same as the image size used in clustering. For example, CMP depends on a much larger area, depending on the type of model used for identifying CMP-susceptible geometries and metal densities.
TABLE I
Summary of the Two-Pass K-Means Clustering Outcome for Each Layer for 738 IC Failures
TABLE II
Process Parameters Used for Lithography Simulation
Fig. 11. (a) SEM image of a poly-to-active bridge and (b) its corresponding
layout snippet with its lithography simulation result.
Fig. 12. (a) SEM image of a bridge in the metal-2 layer and (b) its corresponding layout snippet with its lithography simulation result.
Fig. 10. Illustration of a hotspot (insufficient contact coverage) identified through lithographic simulation of a layout snippet.
To validate LASIC, the clustering outcomes of the two
ICs that have PFA results are examined in detail. Both ICs
have snippet images of their corresponding defect locations
(as identified by PFA) in highly-ranked clusters. Specifically,
the first PFA result is a bridge between a polysilicon gate
and its active region. The snippet image corresponding to this
defect belongs to the second largest cluster (with size 348) in
the polysilicon layer (the maximum cluster size in the
polysilicon layer is 439). Fig. 11 shows the SEM image
of the PFA result and its corresponding layout region. The
lithography simulation from Fig. 11(b) does not show any
abnormalities, indicating that the systematic issue, if any, is
likely due to other reasons.
The second PFA result is a bridge between two nets in
metal-2, and its SEM image and layout region are shown
in Fig. 12. The snippet image corresponding to this defect
belongs to a cluster that is in the top 10.6% of all clusters.
These results are strong evidence that LASIC is able to identify
systematic defects: PFA is a time-consuming process [6]
and is therefore selectively performed on certain ICs only when systematic issues are suspected, which means that failures
due to random defects are typically not selected to undergo PFA.
From the clustering outcome in Table I, it is clear that other
highly-ranked clusters are also present, suggesting the possible
presence of additional systematic defects. Unfortunately, this
TABLE III
Characteristics of the Designs, Tests, and Defects Used in the Simulation Experiments
cannot be confirmed since further PFA results are unattainable.
In addition, the point should be made that a large cluster size alone is
not indicative of the existence of a systematic defect. Deeming
a cluster large or small should not be based on raw element
count but instead on a normalization that depends on
the total number of similar snippets within the design.
B. Simulation-Data Experiment
In this section, two sets of simulation experiments are conducted to examine how LASIC performs in the presence of:
1) a single type of systematic defect and 2) multiple types of
systematic defects.
In both experiments, the defect simulation framework
SLIDER (simulation of layout-injected defects for electrical
responses) [59] is used to generate populations of virtual
IC failures. SLIDER achieves fast and accurate defect simulations using mixed-signal simulation. Specifically, SLIDER
injects defects at the layout-level and extracts a circuit-level
netlist of the defect-affected region(s). This ensures that the
defects are accurately represented. Circuit-level simulation is
performed using the extracted netlist of the defect-affected
region(s) to ensure accuracy while logic-level simulation is
performed for the rest of the circuit to keep the runtime
tractable. By using a mixed-signal simulator (such as Cadence
AMS designer [60]), appropriate signal conversion can be
automatically performed at the interface that connects the digital and analog domains. SLIDER can generate defects that
follow a given DDSD (i.e., random defects). It
can also generate defects that share a common layout pattern
(i.e., systematic defects). These two defect-generation modes
are particularly useful for the validation of LASIC.
1) Experimental Setup: Table III summarizes the characteristics of the designs, tests, and defects injected for
both experiments. In both experiments, eight benchmark
circuits [24], [25] are used whose names are shown in column 1 of Table III. These circuits are placed and routed using
Cadence First Encounter. The technology used is the Taiwan
Semiconductor Manufacturing Company 180-nm CMOS process from the publicly-accessible Metal Oxide Semiconductor
Implementation Service webpage [62]. The tests for these circuits are generated using Synopsys TetraMAX [63] and have
100% fault coverage. The corresponding number of gates,
layout area, and number of tests for each circuit are shown in
columns 2–4 of Table III, respectively.
Five different systematic populations and one random population, each consisting of 100 defective chips, are generated for
use in both experiments. Both the systematic and random populations are generated using the systematic- and random-defect
generation modes of SLIDER, respectively. This process is
repeated four times to inject four different types of defects
(namely, metal-2 bridge, metal-2 open, metal-3 bridge, and
metal-3 open). Since LASIC uses logic-level diagnosis as the
starting point, two different defects are treated the same way
if they both affect the same net. Therefore, it is important
to know the number of distinct nets (Ndistinct ) that are actually affected by the defects. These counts are summarized in
columns 5, 6, 8, 10, 12, and 14 of Table III. Column 5 reveals
that the number of distinct nets in the random population is
more than 100. Interestingly, the number of distinct nets in
all the systematic populations is less than 100, as shown in
columns 6, 8, 10, 12, and 14 of Table III. The former happens since a random defect can occur anywhere in the circuit
and can affect multiple nets. The latter occurs since, as shown
in Fig. 6, snippets from the same net tend to be correlated,
and this phenomenon also results in systematic defects being
injected along the same net. In addition, it is of interest to
understand the amount of overlap between the random and
systematic defects in terms of the number of common nets
that they both affect since this too will affect the accuracy
of LASIC. For example, a random defect affecting the same
net as a systematic defect can “improve” the accuracy of
LASIC since the affected net also contains the layout pattern
of the systematic defect. Therefore, the number of common
nets (Noverlap ) between each of the five systematic populations
and the random population is recorded in columns 7, 9, 11, 13,
and 15, respectively. Note that the entries in columns 5–15 are
fractional because, as mentioned above, the experiment is performed four times to inject four different types of defects and
therefore the counts are averaged over the defect types.
From Table III, it is clear that as the circuit size increases,
the chance of random and systematic defects affecting the
same net decreases, as expected. It is also clear that the amount
of overlap is less than 20% in general (except for the smaller
circuits C5315 and C6288). This is desirable since this prevents random defects from affecting the accuracy of LASIC
in systematic defect identification.
In both experiments, the systematic defects are carefully
generated so that their underlying layout pattern is not a common layout pattern. Otherwise, the pattern will be found in
a dominant cluster regardless of whether a systematic issue is
present. Specifically, we identify layout patterns that occur at
least N times (N ≫ 100) for each metal layer of interest, and
randomly select instances of the pattern for defect injection.
2) Single Systematic Defect Identification: In the first
experiment, failure populations are created by randomly sampling from the “random” population and the “systematic 1”
population, described in the experimental setup. This process
is repeated to generate failure populations with a different proportion of random and systematic defects. For example, in the
first iteration, the population has 100 systematic defects and
0 random defects; in the second iteration, the population has
90 systematic defects and 10 random defects; in the third iteration, the population has 80 systematic defects and 20 random
defects, and so on. The goal is to evaluate how much random
“noise” can be tolerated by LASIC. The experiment is performed using the eight benchmark circuits in Table III. The
experiment is performed four times to inject four types of
defects into these circuits: 1) metal-2 bridge; 2) metal-2 open;
3) metal-3 bridge; and 4) metal-3 open. The results are averaged over the eight circuits. In addition, to isolate the noise
In addition, to isolate the noise that can arise from inaccuracies and ambiguities in diagnosis, this experiment is performed under two scenarios: ideal diagnosis and "real" diagnosis. Ideal diagnosis means that the diagnosis outcome pinpoints the site of the injected defect with no ambiguity, while real diagnosis can be inaccurate (i.e., the diagnosis outcome does not contain the injected-defect net) and/or low in resolution (i.e., additional nets are reported along with the net containing the injected defect). Since virtual failure data is used, ideal-diagnosis results are easily obtained by finding the net whose geometry overlaps the actual defect location. Real-diagnosis results are obtained by applying a commercial diagnosis tool to the virtual failure data, which introduces inaccuracies and ambiguities due to the inherent limitations of logic-level diagnosis; this can be viewed as another source of noise in the data. Comparing the two scenarios shows how LASIC performs under ideal and real conditions. The cluster that contains the underlying pattern of the injected systematic defect is deemed the correct cluster. For each scenario, the average rank of the correct cluster is plotted against the percentage of injected systematic defects. The data is summarized in Fig. 13(a) and (b) for the ideal and real diagnoses, respectively.
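The metric itself can be sketched in a few lines. The sketch assumes clusters are ranked by size (largest first) and that, for evaluation only, each snippet carries a label identifying its underlying pattern; the `pattern_of` helper is a hypothetical stand-in for that labeling, not part of LASIC.

```python
# Sketch: rank of the cluster that contains the injected systematic pattern.
# Rank 1 is the largest cluster; `pattern_of` is an evaluation-only assumption.

def correct_cluster_rank(clusters: list[list], injected_pattern: str,
                         pattern_of) -> int:
    ranked = sorted(clusters, key=len, reverse=True)
    for rank, cluster in enumerate(ranked, start=1):
        if any(pattern_of(snippet) == injected_pattern for snippet in cluster):
            return rank
    return len(ranked) + 1    # pattern absent: worse than any actual rank
```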
It is clear from Fig. 13 that, for both ideal and real diagnoses, the average rank of the correct cluster decreases as the percentage of systematic defects increases. This is desirable, since a lower rank means the layout geometry associated with the injected defect resides in a larger cluster. It is also evident that the technique is effective for both ideal and real diagnoses: the correct cluster appears within the top 40 ranked clusters (top 3%) even when the population contains only 20% systematic defects. In addition, there is a steep increase in the average rank of the correct cluster when the proportion of systematic defects drops from 10% to 0%. This is expected, since the "correct pattern" becomes just another random pattern when no systematic defect is present, and should therefore fall into a very small cluster with a poor (numerically large) rank. The performance of LASIC is expectedly somewhat worse for real diagnosis because of the ambiguities and inaccuracies inherent in logic diagnosis: the ranks of the correct clusters increase by a factor of ∼2.4 on average in the real-diagnosis scenario.

Fig. 13. LASIC evaluation for (a) ideal and (b) real diagnoses.

TABLE IV
MULTIPLE SYSTEMATIC DEFECT CREATION FOR LASIC EVALUATION
3) Multiple Systematic Defect Identification: By focusing
on one systematic issue at a time, the previous experiment
evaluates the performance of LASIC in the presence of random defects and diagnosis inaccuracy. However, in practical
situations, there may be multiple systematic issues present
in different proportions. To understand the effectiveness of
LASIC in practice, all five systematic populations described in the experimental setup are used. Care is taken to ensure that the underlying patterns differ across the five populations, mimicking the presence of five distinct systematic issues in each benchmark circuit.
The random population is also included to mimic the presence
of random defects. The defects from these populations are randomly sampled and mixed in different proportions. Table IV
summarizes the amount of sampling from each population for
each scenario. The same sampling procedure is performed for
each benchmark circuit.
Column 1 shows the name for each defect population,
namely systematic 1–5 and random, corresponding to the five
systematic populations and the one random population. The
remaining four columns define the four different scenarios
considered in this experiment. Each numeric entry in Table IV gives the number of defects sampled from the corresponding population (row) in the corresponding scenario (column). For
example, in the scenario “Dom1,” 70 defects are sampled from
the first systematic population, 70 defects are sampled from
the random population, and 20 defects are sampled from each
of the four remaining systematic populations.
Four different scenarios are considered in this experiment; a sketch of the corresponding sampling step follows the list.
1) "Equal" (column 2), where the number of defects from each systematic population is equal.
2) "Linear" (column 3), where the number of defects decreases linearly across the systematic populations.
3) "Dom1" (column 4), where one systematic population contributes substantially more defects than the rest.
4) "Dom2" (column 5), where two of the systematic populations contribute substantially more defects than the rest.
In each scenario, 70 random defects are included to mimic the presence of random defects (row 7 of Table IV).

Fig. 14. LASIC evaluation for multiple systematic defects for (a) metal-2 bridge, (b) metal-3 bridge, (c) metal-2 open, and (d) metal-3 open.
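A minimal sketch of this sampling procedure follows. Only the "Dom1" counts are spelled out in the text, so the dictionary below encodes just that scenario; the population containers and names are illustrative assumptions.

```python
import random

# Sketch: assemble one scenario's failure population from its Table IV counts.
# Only the Dom1 counts (70 from systematic 1, 20 from each other systematic
# population, 70 random) are stated in the text; the other scenarios would
# use their own count dictionaries.
DOM1_COUNTS = {"systematic1": 70, "systematic2": 20, "systematic3": 20,
               "systematic4": 20, "systematic5": 20, "random": 70}

def sample_scenario(populations: dict[str, list], counts: dict[str, int]) -> list:
    """populations maps a name to its defect list; counts gives sample sizes."""
    mixed = []
    for name, k in counts.items():
        mixed.extend(random.sample(populations[name], k))
    random.shuffle(mixed)
    return mixed
```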
LASIC is applied to each of the four scenarios for each
benchmark circuit. Here, the goal is to evaluate whether
LASIC is able to identify all five systematic defect types.
Therefore, the number of systematic issues correctly identified
is plotted against the number of top-ranked clusters considered, which is varied from 5 to 55. In contrast to the previous
experiment, only real diagnosis is used in this experiment in
order to mimic how LASIC would be deployed in practice.
Again, the experiment is repeated four times to inject four
types of defects into the benchmark circuits: 1) metal-2 bridge;
2) metal-3 bridge; 3) metal-2 open; and 4) metal-3 open. The
results for each of the four defect types are averaged over the
eight circuits and are shown in Fig. 14(a)–(d).
It is clear from Fig. 14 that the number of systematic issues
correctly identified increases rapidly to the ideal value of five
as the number of top-ranked clusters considered increases. As
demonstrated in Fig. 14, LASIC identifies over 90% (i.e., 4.5 of 5.0 on average) of the systematic defects in the top 55 clusters for all
defect types considered under all four scenarios. The results
clearly show that LASIC is effective even when there are
multiple systematic-defect types and random defects present.
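The curve behind Fig. 14 amounts to sweeping the number of top-ranked clusters considered and counting how many injected systematics have been found; a minimal sketch, reusing the hypothetical evaluation-only `pattern_of` labeling assumed earlier:

```python
# Sketch: number of injected systematic patterns present in the top-k clusters.
# `pattern_of` is the evaluation-only labeling assumed earlier, not LASIC's.

def systematics_found(clusters: list[list], injected: set[str],
                      pattern_of, k: int) -> int:
    top_k = sorted(clusters, key=len, reverse=True)[:k]
    found = {pattern_of(s) for cluster in top_k for s in cluster}
    return len(found & injected)

# Sweep k from 5 to 55 as in Fig. 14:
# counts = [systematics_found(clusters, injected, pattern_of, k)
#           for k in range(5, 56, 5)]
```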
IV. CONCLUSION
In this paper, we described LASIC, a comprehensive
methodology for identifying yield-limiting layout features.
LASIC identifies systematic defects by extracting and clustering snippet images of the diagnosis-implicated layout regions.
Further analyses, such as lithography simulation or PFA, can
be used to confirm the systematic issues. Silicon experiment
results have demonstrated that LASIC is effective in grouping snippet images with similar features to identify systematic
defects. Moreover, several lithographic hotspots have been
identified within the dominant clusters. Simulation experiments using SLIDER accurately quantify the effectiveness and
accuracy of LASIC for failure populations affected by either
single or multiple systematic-defect types. LASIC provides the missing link in many systematic-defect identification methodologies by offering an automatic method for discovering failure-causing layout features. It can be integrated into existing systematic-defect identification methodologies or used independently; integration is easily achieved by employing LASIC as a post-processing step that automatically identifies and extracts problematic layout features. Finally, LASIC is scalable, since the number and size of the snippets are independent of the design and instead depend on the failed-chip population size and the resolution of diagnosis.
ACKNOWLEDGMENT
The authors would like to thank the LSI Corporation for
providing design and fail data.
Wing Chiu (Jason) Tam (S’03) received the
B.S. degree from National University of Singapore,
Singapore, and the M.S. degree from Carnegie
Mellon University, Pittsburgh, PA, USA, in 2003 and
2008, respectively.
He was with Advanced Micro Devices and Agilent Technologies, Singapore, from 2003 to 2006. He
is currently a Practicing Engineer with Nvidia,
Santa Clara, CA, USA.
Ronald D. (Shawn) Blanton (S’93–M’95–SM’03–
F’08) received the B.S. degree in engineering from
Calvin College, Grand Rapids, MI, USA, the M.S.
degree in electrical engineering from the University
of Arizona, Tucson, AZ, USA, and the Ph.D.
degree in computer science and engineering from
the University of Michigan, Ann Arbor, MI, USA,
in 1987, 1989, and 1995, respectively.
He is currently a Professor of Electrical and
Computer Engineering, Carnegie Mellon University,
Pittsburgh, PA, USA, where he also serves as
the Director of the Center for Silicon System Implementation, and the Founder
and the Leader of the Advanced Chip Testing Laboratory. His current research
interests include test and diagnosis of integrated heterogeneous systems.