Automated Volume Diagnostics

Accelerated yield learning in 40nm and below technologies
John Kim
Yield Explorer Applications Consultant
Synopsys, Inc
June 19, 2013
Agenda
Current Challenges
Diagnostics vs Volume Diagnostics
Analysis Flows with Volume Diagnostics
Collaboration between Fab/Fabless
Conclusions
Systematic Issues Rising Dramatically
[Chart: Trend of initial yield loss by technology node. Y-axis: Yield Loss (%); X-axis: Technology Node (nm). Curves: systematic design-based yield issues, litho-based yield issues, and random defect-based yield issues.]
• Systematic contributions to initial yield loss are worsening at newer technology nodes
• Random defect issues are also increasing, but can be managed with existing methods and infrastructure
• Different methods are needed to address these new mechanisms
Chart data source: IBS
How to Address These Systematics?
• Traditional yield learning methods can address random defectivity sources:
– Inline inspections
– Technology structural and IP test chips
– Single production-volume yield learning vehicle
– Memory-array-based detection and FA localization
– Various EFA visualization techniques
– Litho/DFM simulation
– Legacy learning
• But what about product- and technology-specific design and layout systematics?
ATPG Diagnostics-Based Yield Learning
• ATPG diagnostics-based yield learning gives us an enhanced level of analysis and characterization capability
• Most logic products already use ATPG for automated, high-coverage test pattern generation
• Diagnostics provides very high localization of the likely defective region, often down to a few square microns when physical diagnostics is used
• Volume diagnostics adds statistical confidence to identify root cause
How Logic Diagnostics Work
• Assumptions:
– Many ATPG patterns
– ATE failures recorded from all those patterns
• Most faults produce a unique test response signature
• Find the fault whose signature most closely matches the defect signature observed on the ATE
[Example: applied patterns P1:11001010, P2:00011101, P3:10100011. Signature for fault A: P1:PPPPPPP, P2:PPPPPPP, P3:PFPPPPP. Signature for fault B: P1:PPPPFPP, P2:PPFFFPF, P3:PFFPPPP.]
The following pages provide the basics of how scan diagnostics works.
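To make the matching step concrete, here is a minimal sketch (not the actual TetraMAX scoring) that scores each simulated fault signature against the observed ATE pass/fail signature by counting agreeing positions; the fault names and signatures below are the illustrative values from the example above.

```python
# A minimal sketch of signature matching, assuming per-pattern pass/fail
# strings ('P'/'F') for the observed ATE response and for each simulated
# fault. Position-count scoring is illustrative only.
observed = {"P1": "PPPPFPP", "P2": "PPFFFPF", "P3": "PFFPPPP"}

fault_signatures = {
    "fault_A": {"P1": "PPPPPPP", "P2": "PPPPPPP", "P3": "PFPPPPP"},
    "fault_B": {"P1": "PPPPFPP", "P2": "PPFFFPF", "P3": "PFFPPPP"},
}

def match_score(sig, obs):
    """Count positions where the simulated and observed responses agree."""
    return sum(s == o
               for pat in obs
               for s, o in zip(sig[pat], obs[pat]))

best = max(fault_signatures,
           key=lambda f: match_score(fault_signatures[f], observed))
print("best-matching fault:", best)   # -> fault_B
```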
[Diagram, slide 8: load data from the ATE is shifted into the scan chains, a system clock pulse captures the combinational logic response, and the chains are unloaded for comparison against expected data.]
[Diagram, slide 9: a defect in the combinational logic causes captured values to miscompare against the expected unload data.]
[Diagram, slide 10: the set of miscomparing scan cells across Scan Chain1 and Scan Chain2 forms the failure signature used to localize the defect.]
Diagnostics
• Subnet diagnosis enables even further localization of open defects
[Diagram: on a multi-branch net, the pattern of passing and failing receivers bounds the failing region between the driver and the failing branches.]
What is Volume Diagnostics?
• Performs statistical analysis of diagnostics results from multiple failing chips
• Identifies systematic, yield-limiting issues by using design data
• Provides actionable information on high-value candidates for Physical Failure Analysis (PFA)
• Can apply to both chain and logic diagnostics
So why volume diagnostics vs. single diagnostics?
[Chart: Prioritizing the Systematic Yield Issues. Relative yield fallout by defect type, Categories 1 through 4.]
Why Volume Diagnostics?
• To explain why volume diagnostics is important, let's first consider BINSORT data
• What can be concluded from one die of BINSORT data?
• Can anything be concluded from this failing die in BIN 88?
• How important are BIN 88 failures on this wafer?
• Is it a systematic failure?
Why Volume Diagnostics?
• To understand its importance and characteristics, we need more data to draw a conclusion
• With the inclusion of the other dies on this wafer map, it becomes clearer that:
1. BIN 88 is unlikely to be systematic, nor is it an important failing BIN on this wafer
2. BIN 68 is the most important issue here and shows a strong systematic signature
Analysis of a statistically significant volume of data provides a better level of understanding about the failing population.
Why Volume Diagnostics?
• Similar to the BINSORT example, volume diagnostic analysis of multiple dies/wafers/lots provides a clearer picture of the most important systematics in a sample
[Diagram: with diagnostics from 1 die, no systematics are observable (a single Failing Net1). Increasing the analysis sample to 10 diagnosed dies, the systematic becomes observable: Failing Net1 repeats across dies, alongside Failing Net2 and Failing Net3.]
What is Volume Diagnostics?
• Volume diagnostics can describe any statistical treatment of diagnostic data (both chain and logic)
• It can range from the simple to the extremely sophisticated
Basic Volume Diagnostics
• Manual parsing of diagnostic datalogs and data manipulation
• Simple summing, sorting, and filtering to identify strong systematic signals
• Manual inspection of results
• Manual generation of coordinates for the FA team to localize the defect
Fully Automated Volume Diagnostics
• Automatic/semi-automatic prefiltering of bad diagnostic data
• Analyzes data from multiple directions, with single or multiple variable combinations
• Applies statistical tests and intelligent heuristics to interpret and quantify results
• Aligns non-diagnostic data sources to enrich understanding
• Generates tool files to drive FA equipment to the likely source of defects
Considerations During Analysis
• Some important details should be considered in volume diagnostics:
– Should any data be removed prior to analysis?
– Are normalizations required to interpret the data?
– How important are the findings, in terms of overall yield impact and statistical significance?
– Is there supporting data to validate the findings?
– Is the problem new, or pre-existing?
– Are the results something that FA can reasonably isolate?
Automated Volume Diagnostics
• With volume diagnostics, we are usually trying to answer specific questions. For example:
– Is there a systematic metal or via location that is repeatedly failing?
– Are there standard cells that are failing above their entitlement?
– Are there scan chains that are consistently failing?
– Is there a design or IP block that is failing above its entitlement?
– What is the highest-yield-impact systematic in the analyzed dataset?
– Is there a systematic lithography weakpoint associated with a significant number of fails?
– Were any of the failures observable inline?
• There are a large number of possible questions that can be asked
• A comprehensive and flexible system to quickly configure, analyze large amounts of data, and direct analysts to next steps is necessary for a production volume diagnostic flow
Volume Diagnostics – Analysis
• An effective volume diagnostics system should minimally provide:
– Identification of the systematic observation down to its smallest resolvable element
– Quantification of the systematic in terms of yield impact
– Statistical significance of the systematic
– Output information sufficient for failure analysis (wafer die X/Y and within-die coordinates) in a format easily consumed by FA labs
– Additional information to help FA teams isolate defects and/or test/design/process teams to investigate possible fixes
Volume Diagnostics
• What are some examples of volume diagnostic analysis results?
– Design-based:
  – Repeating nets or instances
  – Std cell systematics
  – Design/IP block sensitivity
  – Routing pattern dependency
  – Scan chain failures
  – Timing slack analysis
  – Voltage/temperature sensitivity
  – Spatial systematics
– Process-based:
  – FEOL, metal, or via layer systematic opens/shorts
  – Lot-to-lot, wafer-to-wafer variability
  – Process equipment/history dependency
– Test-based:
  – Test pattern dependency
  – Tester/probecard dependency
• Or combinations of any of the above
Use Case: Which Nets Fail Systematically?
A net is a unique element
on a design. It only occurs
once out of possible 10s or
100s of millions of possible
nets on a design.
Repetitive failures on a net
indicate a strong
systematic signatures
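The deck does not specify the statistic used; as an illustration only, a repeating-net screen can test each net's callout count against the chance rate under a uniform random baseline. The net names, design size, and significance threshold below are assumptions.

```python
# A minimal sketch of a repeating-net screen: flag nets whose callout
# count is implausible under a uniform random baseline. Illustrative only.
from collections import Counter
from scipy.stats import binom

callouts = [  # (die_id, failing net from diagnosis) -- assumed data shape
    ("die_001", "u_core/u_alu/n1234"),
    ("die_002", "u_core/u_alu/n1234"),
    ("die_003", "u_io/n0077"),
    ("die_004", "u_core/u_alu/n1234"),
]

n_failing_dies = len({die for die, _ in callouts})
total_nets = 50_000_000          # nets in the design (assumed)
p_random = 1.0 / total_nets      # chance a given net is called out by chance

for net, k in Counter(net for _, net in callouts).items():
    # P(net called out >= k times in n_failing_dies dies) under the baseline
    p_value = binom.sf(k - 1, n_failing_dies, p_random)
    if p_value < 1e-6:           # illustrative significance threshold
        print(f"{net}: {k} hits across {n_failing_dies} dies, p={p_value:.2e}")
```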
Use Case – Are Any Std Cells Failing Systematically?
• Early in technology development, FEOL issues are prominent
• It is important to evaluate std cell failures to characterize FEOL systematics
Use Case – Are Any Std Cells Failing Systematically?
• It is important to use design data to understand fail entitlement when interpreting results
[Chart callout: the #1 cell is actually failing at its random baseline entitlement. What appeared to be the #2 item is actually the worst when comparing the gap vs. entitlement.]
Entitlement Gap Discussion
• What is an entitlement gap?
– It means that failures aren't evaluated on an absolute basis
– Unfortunately, there is no 100% yield
– There is always some baseline amount of failures expected. Observed failures need to be compared against the expected amounts to properly conclude that they are systematic
Some Basic Concepts
• How should we assess the effect of a factor?
• Let's consider the following general case
[Chart: yield loss for factor X plotted by item number. What is the amount of yield loss for item X? Is it ~30%?]
Some Basic Concepts (cont'd)
• What if we had additional information about item X?
• For example, a comparison against the yield loss of the other items for that variable?
[Chart: yield loss for factor X by item number. For item 20, what is the interesting quantity? We can say that item 20 has a 20% yield loss above the baseline entitlement of 10% loss for mechanism X.]
Some Basic Concepts (cont'd)
• Is this a reasonable way to look at yield loss mechanisms?
• Actually, yield/product engineers do this regularly
• Consider the familiar bin loss Pareto
From this data alone, it would appear that Bins 68, 6, and 41 are problematic at ~20% yield loss; the bin Pareto by itself isn't that useful. But including a reference for what the bin losses should be provides a baseline entitlement for each bin. With that baseline entitlement, it is clear that only Bin 68 is the excursion, and the amount is ~15%.
Entitled Bin Value
• From the previous example, what could explain why Bins 6 and 41 are high, yet not necessarily unexpected?
• Consider a situation where binning is done by major functional block within the design
[Diagram: Bin 6 and Bin 41 cover functional blocks occupying large portions of the chip, while Bin 68 covers a much smaller portion.]
• In this case, if the three bins are failing at the same rate, we would suspect that the Bin 68 failures have some unique systematic
Gap Metrics
• General formula for the gap of mechanism i:
Gap_i = Observed_i − Expected_i
• In this case:
– Observed = measured from test, extracted by diagnostics, expressed as % of total dies
– Expected = entitlement quantity, also expressed as % of total dies
• Why gap:
– The gap cannot exceed the observed fail %
– i.e., if the observed loss is 1%, then even if the fail rate is very high, the gap cannot exceed 1%. This ensures that focus stays on high-yield-impact issues (see the sketch below)
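A minimal sketch of the gap computation, using illustrative mechanism names and percentages rather than data from the deck:

```python
# Gap_i = Observed_i - Expected_i, both in % of total dies.
# Mechanism names and numbers below are illustrative.
mechanisms = {
    # mechanism: (observed fail %, expected/entitlement %)
    "cell_FADDX1": (3.0, 0.4),   # large gap: likely systematic
    "via12_opens": (1.0, 0.9),   # near entitlement: likely random baseline
}

gaps = {name: obs - exp for name, (obs, exp) in mechanisms.items()}
for name, g in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    obs, exp = mechanisms[name]
    print(f"{name}: observed {obs}%, expected {exp}%, gap {g:+.1f}%")
```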
Gap to Model – Basics
• Let us consider another familiar example
[Diagram: Device A and Device B, where the area of Device A = ½ the area of Device B. Both devices are designed and manufactured in the same technology (e.g., 28nm) in the same foundry. Is Y_A <, =, or > Y_B?]
• Do you expect the yields to be the same or different? We know intuitively that the larger die should yield less.
Gap to Model – Basics
• Let's look at another example of this concept
• Imagine we are a foundry, running 8 different products in the same fab, in the same process, during the same time period
• The yield summary per device is as follows
[Table: yield per device. What conclusion can we make? Is some device here not behaving properly? What is the missing information?]
Gap to Model – Basics
• Let's include the area of each device to see if that helps us come to a conclusion
Gap to Model – Basics
• Based on the area of each device, we can estimate an expected yield using a yield model, a defectivity rate, and the area of each device
• Now it is clearer that device E is misbehaving (a sketch of the idea follows)
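The deck does not name the yield model used; a minimal sketch with the classic Poisson model Y = exp(−A·D0), using assumed areas, yields, and defect density:

```python
# A minimal sketch of an area-based yield entitlement using the Poisson
# yield model Y = exp(-A * D0). Device areas, observed yields, and the
# defect density are illustrative values, not from the presentation.
import math

D0 = 0.25  # defects per cm^2 (assumed defectivity rate)

devices = {  # name: (area_cm2, observed_yield)
    "A": (0.50, 0.885),
    "B": (1.00, 0.780),
    "E": (0.70, 0.610),
}
for name, (area, observed) in devices.items():
    expected = math.exp(-area * D0)   # model-entitled yield for this area
    gap = expected - observed         # positive gap => underperforming
    flag = "  <-- misbehaving?" if gap > 0.05 else ""
    print(f"Device {name}: expected {expected:.1%}, observed {observed:.1%}, "
          f"gap {gap:+.1%}{flag}")
```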
Use Case – Are Any Std Cells Failing Systematically?
• It is important to use design data to understand fail entitlement when interpreting results
• A volume diagnostic analysis tool should be able to use design normalizations and generate expected entitlements for proper interpretation
[Chart callout, as before: the #1 cell is actually failing at its random baseline entitlement; what appeared to be the #2 item is actually the worst when comparing the gap vs. entitlement.]
Volume Diagnostics – Yield Normalization
• In addition to design normalization, it is important to normalize results to wafer yield
• Volume diagnostic analysis should consider the overall yield data on the wafer to understand the true yield impact of the systematic
[Diagram: does the systematic here have the same yield impact on both wafers? In Case A, 14/21 dies show the systematic and its effect on yield is very large; in Case B, 14/21 dies show the systematic but its effect on yield is very small.]
Physical Verification
• Use Cases:
1. Overlay hotspots onto failing diagnostic nets or instances
– Localizes the failure to a small point on long failing nets
[Diagram: this net would be too long for FA without any additional information. Overlaying a litho weakpoint simulation hotspot narrows the failure location down to a very specific point on one layer.]
DFM Hotspot Correlation
• In addition to helping FA, statistical analysis is also important to quantify the effect of different hotspot rules on diagnostic failures
• Various metrics, such as hotspot fail rate and candidate hit rate, are calculated and visualized (a sketch follows)
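The exact metric definitions are not given in the deck; a minimal sketch under assumed data shapes, where `candidates` maps each die to its diagnostic callout nets and `hotspot_nets` is the set of nets crossing a DFM hotspot:

```python
# A minimal sketch of two correlation metrics named on the slide.
# Data shapes and net names are assumptions, not a Synopsys API.
candidates = {
    "die_001": {"n12", "n98"},
    "die_002": {"n12"},
    "die_003": {"n55"},
}
hotspot_nets = {"n12", "n40"}

all_callouts = [net for nets in candidates.values() for net in nets]
hits = [net for net in all_callouts if net in hotspot_nets]

# Candidate hit rate: share of diagnostic candidates landing on a hotspot.
candidate_hit_rate = len(hits) / len(all_callouts)

# Hotspot fail rate: share of hotspot nets that ever appear in a callout.
failing_hotspots = hotspot_nets & set(all_callouts)
hotspot_fail_rate = len(failing_hotspots) / len(hotspot_nets)

print(f"candidate hit rate: {candidate_hit_rate:.0%}")
print(f"hotspot fail rate:  {hotspot_fail_rate:.0%}")
```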
DFM Hotspot Correlation
[Layout overlay: the fault location from the diagnostic log matches a sensitive via-bar hotspot location from the hotspot file; the reported failing cell matches the hotspot location.]
Inline Defect Correlation
• Correlate inline defects with diagnostic candidates
• Various metrics, such as hotspot fail rate and candidate hit rate, are calculated and visualized
Inline Defect Correlation
1. Use inline observed defects to narrow down the source of a diagnostic failure
– For long nets, FA might be difficult. If the net overlays an inline defect, FA can go directly to that location on that layer to localize the defect
– For FEOL instances, the layer that may be the source of the defect can be identified
2. Use inline observed defects to disqualify candidates from FA
– If the source was already identified inline, it doesn't need additional FA characterization; the FA lab's time is better spent finding new defects. Skip FA on this candidate.
Case Study: Large Fallout at Vddmin
• Problem: large Vddmin fallout observed
• Solution: automated DFT-to-parametric correlation study performed
• Considerations:
– 1000 cells x 100 parameters ~ 100,000 possible data pairs
– Need an automated algorithm that searches through all pairs to find the most significant ones (see the sketch below)
– A statistical test automatically finds the significant pair of results (cell and parameter)
• Follow-on validation of this hypothesis by:
– Analyzing split lots (transistor skew lots to validate the finding) and historical trends
– Performing simulations (to verify whether the parametric behavior could be related to the diagnostic signal)
– Performing FA (construction analysis to validate the signal)
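The slide does not name the statistical test; as an illustration, here is a sketch that sweeps all (cell, parameter) pairs with scipy's Mann-Whitney U test, comparing parametric values on dies whose diagnosis called out the cell against the rest. Data shapes and the alpha threshold are assumptions.

```python
# A minimal sketch of the pair search. `fails[cell]` is the set of dies
# whose diagnosis called out that cell; `params[param][die]` is the
# parametric (e.g. WAT/E-test) value for that die. Illustrative only.
from scipy.stats import mannwhitneyu

def significant_pairs(fails, params, all_dies, alpha=1e-4):
    results = []
    for cell, failing in fails.items():
        for pname, by_die in params.items():
            a = [by_die[d] for d in all_dies if d in failing]
            b = [by_die[d] for d in all_dies if d not in failing]
            if len(a) < 5 or len(b) < 5:
                continue  # too few samples for a meaningful test
            _, p = mannwhitneyu(a, b, alternative="two-sided")
            if p < alpha:
                results.append((p, cell, pname))
    return sorted(results)  # most significant (cell, parameter) pairs first
```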
Physical Verification
• Use Cases:
1. STA data alignment with failing instances
– Use static timing analysis results to assign timing slack to failing transition faults
[Chart: transition-fault candidates binned by slack. Large-slack candidates are unlikely to be timing issues and are better candidates for FA; small-slack candidates are likely slow-path related and may have no visible defect.]
Without binning transition candidates by slack, it is possible to confuse mechanisms and generate many NDF (no defect found) results (see the sketch below).
*Nelly Feldman, STMicroelectronics, Silicon Debug and Diagnostics Conference 2012
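A minimal sketch of the slack binning, assuming each candidate carries the STA slack of its failing path; the net names and the 0.5 ns cutoff are illustrative, not from the presentation.

```python
# Bin transition-fault candidates by STA slack before sending them to FA.
# Candidates and the cutoff are illustrative assumptions.
candidates = [
    {"net": "u_a/n1", "slack_ns": 0.05},   # small slack: likely slow path
    {"net": "u_b/n7", "slack_ns": 1.80},   # large slack: good FA candidate
]
SLACK_CUTOFF_NS = 0.5

fa_candidates = [c for c in candidates if c["slack_ns"] >= SLACK_CUTOFF_NS]
timing_suspects = [c for c in candidates if c["slack_ns"] < SLACK_CUTOFF_NS]
print("send to FA:", [c["net"] for c in fa_candidates])
print("likely timing-related (risk of NDF):",
      [c["net"] for c in timing_suspects])
```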
Use Case – Correlation to Memories
• Modern SOCs give us the opportunity to use other product data to help explain diagnostics
• By leveraging correlated results from bitmap classification vs. logic diagnostics, we gain the ability to better understand correlated failures
Use Case – Correlation to Memories
• Using the correlation of bit classifications to cell fail results from diagnostics, we can attain a better understanding of correlated failures
• In this example, the diagnosed FADDX1 cell failures can be investigated by FA of the single-bit memory failures
Use Case – Via Analysis
• In this experiment, failures on Via12C were injected above a background random via fail rate on all other vias
Note: vias that don't have a significant effect on yield will not show results from this method, due to the statistical significance validation.
Use Case – Via Analysis
• Finally, via fail rate values are converted through a yield model into overall yield impact
The yield model transformation is necessary to understand the significance of a result: a via may have a high fail rate but low usage in the design, in which case the yield impact may be small even with a high fail rate (see the sketch below).
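A minimal sketch of that conversion, assuming independent via failures so that a via type with per-via fail rate p and n instances per die limits yield by (1 − p)^n; the via names, rates, and counts are illustrative.

```python
# Convert per-via fail rate and per-die usage into a yield impact,
# assuming independent failures. Values are illustrative only.
via_stats = {
    # via type: (per-via fail rate, count per die)
    "Via12C": (2e-9, 40_000_000),
    "Via34A": (5e-8, 2_000),      # high fail rate but low usage
}
for via, (p, n) in via_stats.items():
    yield_limit = (1 - p) ** n    # survival probability per die
    impact = 1 - yield_limit      # yield loss attributable to this via type
    print(f"{via}: fail rate {p:.1e}, usage {n:,}, yield impact {impact:.2%}")
```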
Diagnostic Considerations
• Some things to consider when analyzing diagnostics:
– Equivalent faults
– Correlated failures
– Diagnostics are heavily resource constrained
– Need to make more intelligent use of upstream data to make diagnostics more targeted: the biggest bang for the buck
Volume Diagnostics Methodology
• Statistically prioritize the candidates from multiple failing dies: 1000s of likely FA sites are reduced to a list of the top 10 sites for PFA
• Localize likely failure sites by mask layer and segment/via using correlations
• More data into volume diagnostics enables better characterization
[Diagram: diagnostics results are correlated with timing, inline, LRC, DRC, and LEF/DEF layout data.]
Data Used in Volume Diagnostics
[Diagram: from design: diagnostics callouts, GDS, LEF/DEF, STA, DRC/hotspot, OPC verification; from fab: BIN & parametric test, WET/WAT/E-test, inline defect, CFM, inline CD metrology. The figure distinguishes required data from optional data.]
Scenario 1: Independent
Access to LEF/DEF is assured, e.g., at an IDM, or at a foundry for its own test chip.
[Diagram: diagnostics callouts, LEF/DEF, GDS, STA, DRC+/hotspot/RDR, OPC verification, BIN & parametric test, WET/WAT/E-test, inline defect, CFM, and inline CD metrology are all available to one party.]
Scenario 2: Foundry-Fabless
Fabless customers don't give LEF/DEF to the foundry.
[Diagram: design-side data (diagnostics callouts, LEF/DEF, GDS, STA, DRC+/hotspot/RDR, OPC verification) is held by the fabless company, while fab data (BIN & parametric test, WET/WAT/E-test, inline defect, CFM, inline CD metrology) stays with the foundry.]
Foundry-Fabless Collaboration
[Diagram: design-side data (diagnostics callouts, LEF/DEF, GDS, STA, DRC/hotspot) and fab-side data (BIN & parametric test, WET/WAT/E-test, inline defect, CFM, inline CD metrology, OPC verification) are exchanged through a Yield Explorer Secure Snapshot.]
Secure Snapshots protect the privacy of sensitive data on either side.
TetraMAX + Yield Explorer
Faster Root Cause Analysis for Yield Ramp
• Enables analysis of silicon defects to accelerate product ramp and increase yield
– TetraMAX diagnoses individual failing die for defect locations
– Yield Explorer correlates these defects across many failing die with physical design and test data
• Easy to deploy
– Support for industry-standard formats (LEF/DEF and STDF)
– Direct interface between TetraMAX and Yield Explorer
[Flow diagram: test patterns and LEF/DEF feed TetraMAX (diagnostics); TetraMAX passes candidates and physical data to Yield Explorer, alongside STDF test data.]
Conclusions
• Design/process systematics are becoming worse at advanced nodes
• Volume diagnostics enables better and faster analysis and FA turnaround
• Many analysis flows are enabled with volume diagnostics
• Collaboration between fabless and foundry is required for complete analysis
• Yield Explorer with TetraMAX provides a complete platform for volume diagnostic analysis