
Modelling the CRM for the Correlation Trading Portfolio

Dherminder Kainth, Jan Kwiatowski & Douglas Muirden

Royal Bank of Scotland

May 19, 2010

Agenda

Regulatory Requirements

 Challenges in Meeting Regulatory Requirements

RBS Approach to CRM Calculation

Modelling Approaches and Assumptions

Price Risk

Simulation of Market

Default Risk

 Appendix

Computational Implementation of CRM

2

Regulatory Requirements

The All Price Risk Measure represents a special form of the Incremental Risk Charge, described in 7.10.55S R (1) for positions in the correlation trading book

The “All Price Risk Measure” must:

Adequately capture all price risks at the 99.9% confidence interval over a capital horizon of one year

Under the assumption of a constant level of risk

And be run at least weekly

The price risks captured include:

Defaults, including the ordering of defaults;

Credit spread risk;

Volatility of implied correlations, including the cross effect between spreads and correlations;

Index to single names basis and implied correlation of an index to bespoke portfolios basis;

Recovery rate volatility;

The risk of dynamic hedging and the cost of rebalancing;

• Though interest rate and foreign exchange risk have not been explicitly mentioned, we consider these to be included in “All Price Risk”

3

Agenda

 Regulatory Requirements

Challenges in Meeting Regulatory Requirements

RBS Approach to CRM Calculation

Modelling Approaches and Assumptions

Price Risk

Simulation of Market

Default Risk

 Timing and Next Steps

 Appendix

Computational Implementation of CRM

4

Naive Implementation of CRM

 Naively implementing the CRM, i.e., computing the 99.9% worst loss on a 1-year horizon for RBS’s entire correlation trading portfolio, is very difficult

For example, using Monte Carlo, we would need to evolve the market forwards in time, pricing and hedging the portfolio as per the desk, tracking P&L over a 1 year horizon

A back of the envelope calculation immediately reveals the high likelihood of failure:

Trade Type      | Trade Count
Bespoke         | ~500
Nth To Default  | ~500
Index CDS       | ~3,000
Index tranches  | ~3,000
Single name CDS | c. 75,000

Computation                  | Timing (seconds)
PV                           | ~10
Parallel CR01                | ~200
Default Delta                | ~60
Recovery Delta               | ~200
Base Correlation Sensitivity | ~60

Figure 1: Numbers and types of trades in our portfolio, along with representative times to compute PV and risks for one trade on one computer

5

Naive Implementation of CRM (cont’d)

Assuming a rehedging frequency of once a month, a grid of 300 computers, and the minimum number of paths needed to estimate the 99.9% confidence limit (i.e., 1,000), we would need ~2,600 hours to compute results for just the bespoke CDOs
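The arithmetic behind this estimate can be reproduced in a few lines. This is a sketch: the per-trade timings are the representative figures from Figure 1, and we assume (our assumption, not stated in the deck) that every risk in the table is computed at each valuation point.

```python
# Back-of-envelope check of the ~2,600-hour figure for the bespoke book.
timings = {  # seconds per trade, per valuation point (representative, Figure 1)
    "PV": 10,
    "Parallel CR01": 200,
    "Default Delta": 60,
    "Recovery Delta": 200,
    "Base Correlation Sensitivity": 60,
}

n_trades = 500        # bespoke CDOs
n_paths = 1_000       # minimum paths for a 99.9% estimate
n_rehedges = 12       # monthly rehedging over 1 year
n_computers = 300     # grid size

seconds_per_trade = sum(timings.values())      # 530 s for PV plus all risks
total_cpu_seconds = seconds_per_trade * n_trades * n_rehedges * n_paths
wall_clock_hours = total_cpu_seconds / n_computers / 3600

print(f"{wall_clock_hours:,.0f} hours")        # on the order of the quoted ~2,600
```

The result lands within a few hundred hours of the quoted number; the exact figure depends on which risks are recomputed at each point.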

Recalibration of the market (which needs to happen for every valuation and hedging time point) adds substantially to this timing

Over the next few slides we highlight:

How one might address the core issue of computational intractability

Issues in simulating the market

Subjectivity of hedging

We will in effect pose a series of questions; the decisions that we have made form the basis of the RBS approach to computing the CRM. This will be discussed in more detail in the following section.

6

Possible Areas of Optimisation: Pricing Algorithms

Choice of Algorithm

Can we use convolution ?

Importance sampling for the Monte Carlo

Replace Recursion ?

 The 1-factor Gaussian copula (with random recovery) is very popular because rapid computational schemes exist. The ASB algorithm (or variants thereof) is commonly used in the industry because it returns (quasi-exact) PVs and risks rapidly.

Faster pricing approaches are well known in the literature; however, these are to some extent (uncontrolled) approximations to the true price.

– LHP (Large Homogeneous Portfolio)

– Conditional Gaussian approach (Shelton)

– Saddlepoint Methods

– Stein

Choice of scheme depends on the trade-off between accuracy and speed

Optimisation of the Implementation

Parallelisation of the code - currently valuation and risks are computed on a grid. Buy more computers?

Performance of grids does not necessarily scale linearly - data passing is a limiting factor

Front office pricing code focuses on accuracy: potential speed-ups by, for example, relaxing tolerances whilst maintaining acceptable accuracy

Rewriting time-critical parts of the code in assembler?

7

Possible Approaches: Changing Mapping Approaches

When pricing bespoke tranches within a copula based model, we apply mapping technologies to determine the base correlation for the bespoke - this reflects the different riskiness of the bespoke tranche relative to the index

 Loss Fraction (“LF”) mapping (the approach used by RBS and much of the industry) is slow - it requires the inversion of prices to determine correlations

 Consider the use of a faster mapping technique such as At the Money (“ATM”) mapping

RBS front office uses LF mapping to risk manage their correlation book

LF deltas differ from ATM deltas

Valuing the current portfolio and hedges using ATM mapping rather than LF will make it appear unhedged

• If we use ATM mapping, we would need to modify RBS’s current portfolio to achieve the same “level of risk” as per LF mapping and then apply a different mapping technique

8

Possible Approaches: Changing Mapping Approaches

Figure 2: Mapping iTraxx S9 to CDX S9 using ATM mapping and LF mapping (base correlation versus strike)

Figure 3: Mapping iTraxx S9 5Y to 7Y using ATM mapping and LF mapping (base correlation versus strike)

We demonstrate the effect of the different mapping approaches in two scenarios:

Figure 2 shows the effect of the ATM and LF mapping when mapping iTraxx S9 to CDX S9. Due to the significant differences between the two indices, neither of the considered mapping methods produces satisfactory results. However, we note that the LF mapping shifts the market correlation curve in the right direction (as opposed to the ATM mapping)

Figure 3 shows the effect of the ATM and LF mapping when mapping iTraxx S9 5-year to 7-year. The two mapping methods produce similar results, with slightly higher correlation values for the LF mapping

9

Subjectivity in Hedging

Typically traders hedge a position in a CDO tranche [a, b] using primarily the constituent CDSs and the index, and sometimes with an additional tranche [l, u]

Delta hedging movements in the Single Name CDS

Delta-hedging movements in the index

Delta and gamma hedging movements in the index

Hedging parallel shifts in correlation

Hedging default risk

Regression based hedging

Traders are free to use any combination of the strategies outlined above; the choice will change depending on market conditions and trader outlook

Algorithmically predicting the hedging strategy is therefore very difficult

Hedging is computationally expensive; furthermore it is very subjective and implementing only a simplistic approach will give rise to greater slippages

10

Simulation of the Market

Simulating the universe of observed prices relevant to the CDO book forwards by periods up to one year is challenging

We need to model possible movements in yield curves and FX rates

There is a need to capture the dynamics of the market implied CDS spreads to model the price risk. Desiderata for the evolution of the CDS spreads include:

Impact of rating migrations (jumps?)

Empirical co-dependence between CDS spreads shows regional and sectoral variation

Co-dependence between CDS spreads is time dependent - showing regime like behaviour

Level dependent volatility

Modelling the index tranche market is, if anything, even more challenging

The observed index tranche market comprises:

Index      | Maturities (initial)  | Tranches (attach-detach)
CDX        | 5, 7, 10 years        | 0-3%, 3-7%, 7-10%, 10-15%, 15-30%, 30-100%
iTraxx     | 5, 7, 10 years        | 0-3%, 3-6%, 6-9%, 9-12%, 12-22%, 22-100%
High Yield | 5 years               | 0-10%, 10-15%, 15-25%, 25-35%, 35-100%

Given the occurrence of defaults, some of these detachments have changed - e.g., for the high yield the original (0,10%) tranche has been completely wiped out

11

Simulation of the Market

Typically these index tranche prices are mapped into base correlations using the (random recovery) Gaussian copula. In simulating the market forwards in time, we need to evolve the price / correlation surface

Can we evolve correlations, e.g., additively?

Correlations are clearly bounded in (0, 1)

However, the problem is far more subtle than this: it rapidly becomes clear that an arbitrary set of correlations does not describe a valid set of prices

Applying historical moves in base correlation to the current base correlation curve can lead to arbitrage situations, for example, negative tranche spreads

In the following graphs, the 3 month move in base correlations from September 2008 to December 2008 is applied to the current base correlation curve to obtain a shifted correlation curve

As can be seen from the graph on the bottom left, the resulting shifted base correlation curve results in tranche spreads which eventually become negative

12

Evolving correlations can lead to arbitrage opportunities

Figure 4 - Historic base correlation moves (iTraxx 5Y). Figure 5 - Historic change applied to spot.

Figure 6 - Base correlation → tranche prices. Figure 7 - Base correlation → tranche prices, zoomed in.

13

Simulation of the Market (cont’d)

For the prices of index tranches to be admissible (i.e., for the absence of arbitrage), a set of strong conditions (that have effectively never been violated for the market quoted points) must hold.

Typically these conditions are expressed in terms of the ETL (Expected Tranche Loss), denoted here by ETL(T, K) = E[min(L_T, K)]

Intuitively, this is just the price of a European (capped call) option on the loss. More formally, we define it as the expected loss on an equity tranche of width K at time T, as seen from time 0.

A number of boundary conditions are immediately apparent:

An equity tranche cannot lose more than its width, i.e., ETL(T, K) ≤ K

To ensure no arbitrage, the density of the loss distribution must be non-negative for all strikes and times. The ETL is just a normalised price of a call option on the loss; hence the non-negativity of the loss density implies that the ETL must be a concave function of K: ∂²ETL/∂K² ≤ 0

Losses cannot be reversed; hence the ETL of an equity tranche must be a constant or increasing function of T: ∂ETL/∂T ≥ 0
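These three conditions can be checked mechanically on any simulated ETL grid. A minimal sketch; the toy surface here is hypothetical (a lognormal loss driven by common shocks, so it satisfies all three properties by construction) and serves only to exercise the checks:

```python
import numpy as np

def check_etl_surface(etl, strikes, times):
    """Check the no-arbitrage conditions on a grid etl[i, j] = ETL(times[i], strikes[j])."""
    cap_ok = np.all(etl <= strikes[None, :] + 1e-12)          # ETL(T, K) <= K
    concave_ok = np.all(np.diff(etl, n=2, axis=1) <= 1e-12)   # concave in K
    increasing_ok = np.all(np.diff(etl, axis=0) >= -1e-12)    # non-decreasing in T
    return bool(cap_ok and concave_ok and increasing_ok)

# Toy ETL surface: L_T lognormal with common shocks, so losses grow pathwise in T
strikes = np.linspace(0.0, 0.6, 61)
times = np.array([0.5, 1.0])
z = np.random.default_rng(0).standard_normal(50_000)
etl = np.empty((len(times), len(strikes)))
for i, t in enumerate(times):
    losses = 0.03 * t * np.exp(0.8 * z)     # same shocks for both horizons
    etl[i] = np.minimum(losses[:, None], strikes[None, :]).mean(axis=0)

print(check_etl_surface(etl, strikes, times))  # True
```

Concavity in K holds exactly for a sample average of min(L, K), so the checks pass without tolerance tuning.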

14

Agenda

 Regulatory Requirements

Challenges in Meeting Regulatory Requirements

RBS Approach to CRM Calculation

Modelling Approaches and Assumptions

Price Risk

Simulation of Market

Default Risk

 Appendix

Computational Implementation of CRM

15

RBS Approach – Disaggregation of CRM calculation into Default and Price Risk

Issues with Simulation

1. Unfeasibly large number of computations required to estimate the 99.9th percentile

2. Calculating hedges is computationally very expensive

3. Hedging strategy is very subjective – dependent upon the market and the trader’s view of the future

Definition of Price & Default Risk

• We define Price Risk as the impact on the portfolio of all moves in the market except for default events

Default Risk is defined to be the impact on portfolio value of default events

• Default events are irreversible; price moves are reversible. Names cannot come back out of default

• Different time horizons for Price Risk and Default Risk:

We can hedge price risk – hence the time horizon for price risk is dependent on hedge frequency (days to 1 month)

Defaults have a longer natural timescale – number of defaults in 1 month is minimal

RBS chosen approach: Evaluate Price Risk and Default Risk separately, then aggregate to obtain CRM

Constant level of risk allows convolution of Price Risk (up to 1 month for re-hedging), cf. IRC

Reduces number of computations required for Pricing Risk

Removes need for extensive computation of sensitivities and reduces subjectivity in choice of hedging algorithm

Need to evaluate default risk separately – defaults are irreversible. Use Monte Carlo for default risk.

Enables development of an importance sampling algorithm for Default Risk

Is more conservative: double counts defaults combined with large spread move scenarios

16

Price Risk - Constant Level of Risk

Mathematically speaking, the constant level of risk assumption translates to assuming an identical loss distribution after each time interval Δ, corresponding to 1/(hedging frequency)

 i.e., after every hedge interval we are able to rehedge such that the overall riskiness of RBS’s portfolio is identical to today’s level

Assume Δ = 1 month. Then the constant level of risk P/L distribution over 1 year is the convolution of 12 copies of the 1-month P/L distribution. This is very powerful:

We do not need to compute actual hedges, just monthly P/L.

Convolution allows us to get easily into the tail i.e., to estimate 99.9%

This leads to significant savings in time

– the computation becomes feasible without the need to move away from our books and records valuation approaches (i.e., CRM and desk approaches are consistent)

Removes the subjectivity in choice of hedging approach

Obviously convolution cannot be used for defaults (names that default over a month would need to come back out of default !)
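The convolution step itself is cheap. A minimal sketch with a made-up discretised 1-month P/L distribution (the grid and probabilities are illustrative, not a real book): convolve 12 monthly distributions into an annual one and read off the 99.9% tail.

```python
import numpy as np

# Hypothetical discretised 1-month P/L distribution on a uniform grid
grid_step = 1.0
pnl_values = np.arange(-10, 11) * grid_step          # -10 .. +10
monthly_pmf = np.exp(-0.5 * (pnl_values / 3.0) ** 2)
monthly_pmf[0] += 0.01 * monthly_pmf.sum()           # fatten the loss tail
monthly_pmf /= monthly_pmf.sum()

# Convolve 12 copies: the annual P/L under constant level of risk
annual_pmf = monthly_pmf.copy()
for _ in range(11):
    annual_pmf = np.convolve(annual_pmf, monthly_pmf)
annual_values = np.arange(len(annual_pmf)) * grid_step + 12 * pnl_values[0]

# 99.9% worst loss: smallest x with P(PnL <= x) >= 0.1%
cdf = np.cumsum(annual_pmf)
loss_999 = annual_values[np.searchsorted(cdf, 0.001)]
print(loss_999)
```

The 99.9% annual loss lands far beyond the worst single-month grid point, which is exactly the point: convolution reaches into the tail without simulating a single 1-year path.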

17

Constant Level of Risk - Convolution

18

Convolution lets us get into the tails!

19

Price Risk – RBS Algorithm

1. Choose time-horizon over which portfolio could be re-hedged (2 – 4 weeks)

2. Simulate Market (index tranches, single name CDS yield curves, basis etc) over hedging interval using our historical simulation algorithm (see below)

3. Compute P/L over this period; repeat ~200 – 500 times to compute a distribution

Use pricing technologies consistent (essentially identical analytics) with those used for books and records valuations

4. Use stressed market scenarios and probability weight (see below) to compute the full 1M P/L distribution.

5. Convolve N times ( N = 12 if hedge frequency = 1M) to obtain full P/L distribution over 1Y

• The use of convolution implies the absence of autocorrelation i.e., the 1M P/L distribution is uncorrelated with next month’s P/L distribution

We will quantify this by examining the impact on price risk of changing the hedging horizon

From a final number perspective, the impact of autocorrelation will be captured via the use of stressed starting scenarios

20

Stressed Starting Scenarios

More significantly, however, the constant level of risk assumption implies that (at the end of each hedging interval, despite significant market moves) we are able to re-hedge our CDO portfolio to the same level of riskiness as today

This is a strong assumption. We therefore aim to apply an approach similar to that used for the IRC, where we use stressed starting scenarios

Algorithmically:

Choose 5 starting scenarios, i.e., the market is in one of 5 starting scenarios (each with a weight – the Gauss-Hermite weight)

– Our CDO positions will only be partially hedged to this scenario; the cost of this partial hedging will be part of the final P/L distribution

– The starting scenarios will correspond to dates on which the iTraxx, CDX and HY indices assumed the values implied by the Gauss-Hermite percentiles

Hedging

The market is then evolved as per the algorithm above; the total loss distribution for 1M is computed, accounting for the impact of the stressed scenarios

Allow partial (risk based) re-hedging of book when switching to stressed scenarios

Model the relevant cost of re-hedging – based on applicable market bid/offers but also including a liquidity premium

21

Stressed Starting Scenarios

Choose stress scenarios to be the market on particular days in our history.

Proxy stress events by the absolute level of iTraxx spreads.

Choose days in history corresponding to stress events by finding days when the quantile of the index matches the probability levels implied by Gauss-Hermite.
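A sketch of how five Gauss-Hermite nodes translate into probability weights and index percentiles (the mapping of percentiles to actual historical dates is not shown; note that `numpy.polynomial.hermite.hermgauss` returns nodes for the weight e^(-x²), so a change of variable is needed for a standard normal factor):

```python
import numpy as np
from scipy.stats import norm

# 5-point Gauss-Hermite rule for the weight exp(-x^2)
nodes, weights = np.polynomial.hermite.hermgauss(5)

# Change of variable x -> sqrt(2)*x turns this into a rule for a standard
# normal factor; the probability weights are w / sqrt(pi).
z_nodes = np.sqrt(2.0) * nodes
probs = weights / np.sqrt(np.pi)

# Percentile of each scenario under the standard normal CDF: the quantiles
# at which the index level would be matched to a historical date.
percentiles = norm.cdf(z_nodes)

print(probs.round(4))        # sums to 1
print(percentiles.round(4))  # symmetric around 0.5
```

The five percentiles are symmetric around the median, so the stressed starting scenarios span both benign and distressed historical markets.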

22

Agenda

 Regulatory Requirements

Challenges in Meeting Regulatory Requirements

RBS Approach to CRM Calculation

Modelling Approaches and Assumptions

Price Risk

Simulation of Market

Default Risk

Appendix

Computational Implementation of CRM

23

Price Risk: Simulation of Market Variables

(Schematic: Simulation of the Market – Yield Curve, Single Name Spreads, Index Loss Fractions)

Typical Approach in the Industry:

Choose a stochastic differential equation (SDE) to describe the market data parameter (e.g., FX) that we wish to simulate

Immediately introduces model dependence.

Estimate the parameters of the SDE (Kalman Filtering)

Simulate the SDE forwards to generate a possible future time series

Issues – why don’t we do this?

Strong model dependence – if we model the market with a diffusion, we will never predict any jumps!

Estimation dependent upon quality of history

Very difficult when we want to simulate a group of inter-related variables (e.g. spreads, yield curves, FX, rates) consistently

Estimation very difficult in the multidimensional case!

Typically attempt to capture codependence using static correlation; real codependences are far more complex – time dependent and showing regimes

Such an approach will struggle to preserve the shapes of curves (e.g. yield curves)

24


Price Risk – Simulation of Market Variables

Δ(t,t+1) market = { Δ(t,t+1) spreads, Δ(t,t+1) FX, Δ(t,t+1) YC, … }

(Schematic: a historical time series of intra-period market changes Δ01, Δ12, Δ23, … is replayed against today’s market by applying H(Δ(t,t+1) market) at each simulation step, with random jumps to new dates in the history.)

Derive a time-series of intra-period changes in market variables (FX, Interest rates, etc.)

Historic changes cannot be applied to current data directly – define a transformation function H()

Apply changes with random sign-reversal of the entire market: drift is randomised while directional correlations are preserved.

25

Price Risk – Simulation of Market Variables


RBS uses the Mahal, Rebonato et al. approach:

Apply sequence of historical market changes to current market

Starting date is randomly chosen

Dates of selected changes must agree across all risk drivers.

Randomly jump from sequence to a new date

Randomised trend reversal

Preserves directional inter-dependence (so, no need to model correlations etc.).

Historical changes must be applicable to current market

• e.g. if the current spread is 10bps, it is not realistic to apply a ±100bps historical change

Transform risk drivers: y = H[x];  y_sim = y_today + Δy_hist;  x_sim = H⁻¹[y_sim]

• e.g. for proportional changes: H[x] = ln(x) (we use this transformation for FX rates)

• Historical changes should look like ‘white noise’ (not dependent on current market)
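A minimal sketch of the transformation for FX, on hypothetical data: historical log-changes are applied additively in y-space and mapped back, which keeps simulated rates positive regardless of the sign or size of the historical move.

```python
import numpy as np

def simulate_fx(today, hist_rates, horizon, rng):
    """Apply a randomly chosen run of historical log-changes to today's rate.

    H[x] = ln(x):  y_sim = y_today + sum of historical dy;  x_sim = exp(y_sim)
    """
    dy = np.diff(np.log(hist_rates))              # historical changes in y = ln(x)
    start = rng.integers(0, len(dy) - horizon)    # random starting date
    sign = rng.choice([-1.0, 1.0])                # randomised trend reversal
    y_sim = np.log(today) + sign * dy[start:start + horizon].sum()
    return np.exp(y_sim)

rng = np.random.default_rng(42)
hist = 1.5 * np.exp(np.cumsum(0.01 * rng.standard_normal(500)))  # toy FX series
sims = np.array([simulate_fx(1.10, hist, horizon=21, rng=rng) for _ in range(1000)])
print(sims.min() > 0)  # the log transform guarantees positivity
```

With the sign-reversal, the simulated distribution is centred on today's rate (no inherited drift), while each path reuses a genuine run of co-dependent historical moves.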

26

Price Risk: Market Simulation – Yield Curves


Individual rates

Simple CEV-type transformations:

H[x] = ∫^x dx′/σ(x′), where

σ(x) = σ · x/x_L                if x ∈ [0, x_L]
σ(x) = σ                        if x ∈ [x_L, x_R]
σ(x) = σ · (1 + C·(x − x_R))    if x ∈ [x_R, ∞)

To be calibrated: σ, x_L, x_R, C

 We also have ‘band reversion’ parameters

The CEV-type transformation is useful but not strictly necessary for 1-month changes

Also necessary to check the shape of simulated curves

 See Mahal et al.: ‘barbell’ effects, shape reversion, etc.

Again, not a problem for short time-horizons

(Figure: σ(x) profile versus rate x, showing the band boundaries x_L and x_R. Source: RBS)
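Numerically, the transformation and its inverse can be applied as sketched below. The parameter values are illustrative stand-ins, not calibrated ones; H is built by integrating 1/σ from an arbitrary reference rate.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative parameters (the real sigma, x_L, x_R, C are calibrated)
SIG, X_L, X_R, C = 0.01, 0.01, 0.06, 10.0

def sigma(x):
    if x <= X_L:
        return SIG * x / X_L              # proportional vol at low rates
    if x <= X_R:
        return SIG                        # normal vol inside the band
    return SIG * (1.0 + C * (x - X_R))    # growing vol above the band

def H(x, x0=0.02):
    """y = H[x]: integral of dx'/sigma(x') from a reference rate x0."""
    return quad(lambda u: 1.0 / sigma(u), x0, x)[0]

def H_inv(y):
    # H is strictly increasing, so a bracketed root search inverts it
    return brentq(lambda x: H(x) - y, 1e-6, 1.0)

x_today = 0.03
dy_hist = H(0.035) - H(0.03)              # a historical change in y-space
x_sim = H_inv(H(x_today) + dy_hist)
print(x_sim)  # 0.035 here: both rates sit inside the band, where H is linear
```

Inside the band the move is applied absolutely; near zero the same y-move becomes proportional, so small rates cannot be pushed negative.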

27

Price Risk: Market Simulation – Spreads


General Approach

The history of an individual name is not necessarily relevant to modelling the spread dynamics of the same name today (e.g. Ford)

For obligors that have experienced downgrades or corporate actions, a direct map to their own spread history and historical spread changes would be unrepresentative of the behaviour they are likely to exhibit today

For any date, bucket names by sector and spread percentile range

For each path (start-date) randomly map each name into a name in the same historical bucket.

Apply the corresponding changes from the mapped name.

Introduces more randomness and therefore a wider range of plausible outcomes

Preserves correlations across an industry

Captures cross-gamma risk concentrated by name
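The bucketing idea can be sketched as below, on toy data; the bucket key is the pair (sector, spread-percentile band), and the quartile banding is our illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy universe: name -> (sector, spread in bps) on a given historical date
names = {f"name{i}": (rng.choice(["Auto", "Energy", "Telco"]),
                      float(rng.lognormal(np.log(100), 0.8)))
         for i in range(300)}

spreads = np.array([s for _, s in names.values()])

def bucket(sector, spread):
    # percentile band of the spread within the universe (quartiles here)
    pct = (spreads < spread).mean()
    return sector, int(pct * 4)

# Group names by bucket, then randomly remap each name within its bucket
buckets = {}
for n, (sec, s) in names.items():
    buckets.setdefault(bucket(sec, s), []).append(n)

mapping = {}
for members in buckets.values():
    for n in members:
        mapping[n] = members[rng.integers(len(members))]  # draw a donor name

# The donor's historical spread changes are then applied to the original name
assert all(bucket(*names[n]) == bucket(*names[mapping[n]]) for n in names)
```

Each name inherits the changes of a donor that looked like it on the historical date, which is what preserves sector correlation while widening the range of outcomes.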

Transformation

Simplest model would be H[x] = ln(x) (as used in the Regulatory Stress Test)

However, we would expect some dependency on the current level of spreads, maybe similar to interest rates

This is work in progress

(Figure: spread mapping exercise – names bucketed by industrial sector (A: Auto, B: Energy, C: Metals, …, K: Telco) and spread percentile (0-100). Source: RBS)

28

Price Risk: Market Simulation – Index Loss Fractions


General Considerations

Need a different parameterisation of index tranche prices beyond correlation

Simulated prices must be arbitrage-free across detachment points and across maturities

RBS is in the process of testing two alternative models, both involving Index Loss Fractions (“ILFs”):

ILF_t(K) = E[min(L_t, K)] / E[L_t]  (equivalently, (E[L_t] − E[max(L_t − K, 0)]) / E[L_t])

ILFs are effectively the ratio of the Expected Tranche Loss for an equity tranche with strike K to the total expected loss (EL) of the index (i.e., the expected tranche loss on an equity tranche with strike 100%).

Index Loss Fractions underlie loss fraction mapping

RBS first simulates single name CDS spreads and the basis – we can therefore compute EL.

We then propose to simulate the ILFs (i.e., the above ratios), and then convert to tranche price

29

Price Risk: Market Simulation – Loss Fractions Bounds


Loss Fraction Bounds

ILFs for any maturity need to be concave functions of detachment point. We model changes so that simulated ILFs automatically have this property

Simulate the equity tranche, and for successively senior tranches find lower and upper bounds for the ILFs, say LB and UB

 Define the tranche ‘Theta’ as the ratio θ = (ILF − LB) / (UB − LB), which must lie between 0 and 1

 Clearly we cannot just add changes to θ (given the bounds); instead map θ onto the range (−∞, ∞) using the inverse normal cumulative distribution: S = Φ⁻¹[θ]

Additive changes in S will therefore always be valid. Are we done?
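A sketch of the θ → S transform with illustrative numbers (the LB/UB values are hypothetical stand-ins for bounds implied by the already-simulated junior tranches): additive shocks in S-space always map back to an ILF strictly inside its bounds.

```python
from scipy.stats import norm

# Illustrative bounds for one tranche's ILF (hypothetical values)
LB, UB = 0.55, 0.80
ilf_today = 0.70

theta = (ilf_today - LB) / (UB - LB)     # in (0, 1) by construction
s_today = norm.ppf(theta)                # map onto (-inf, inf)

# Apply arbitrary historical changes in S-space and map back
for ds in (-3.0, -0.5, 0.0, 0.5, 3.0):
    theta_sim = norm.cdf(s_today + ds)
    ilf_sim = LB + theta_sim * (UB - LB)
    assert LB < ilf_sim < UB             # bounds respected for any shock
```

Because Φ maps the whole real line into (0, 1), no historical change, however large, can push the simulated ILF outside the concavity bounds.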

30

Market Simulation – Loss Fractions Bounds


(Figures: base correlation (y-axis) versus detachment point (x-axis) for the 0-x, 0-3 and 0-7 tranches; the right-hand graphs show magnified views of the corresponding left-hand graphs.)

31

Price Risk: Market Simulation – Loss Fractions Bounds


Let us look at a plot of historical S values

Figure 8 shows a plot of changes in S for 5-year CDX versus Index Expected Loss

There is clearly a pattern: as we go to higher expected loss the range over which S can vary decreases

This effect appears more significant in the data than it is – fewer data points for larger EL

Hence S is not a good quantity to simulate

Figure 9 shows the impact of scaling S by Expected Loss, i.e., Z = S · G(EL)

G(·) is calibrated to different indices and maturities

No pattern remains, i.e., we can apply historical changes in Z to today’s market

Figure 8: Change in S versus Expected Loss (unscaled, EL from 0% to 14%). Figure 9: Change in S versus Expected Loss (scaled). Source: RBS

32

Agenda

 Regulatory Requirements

Challenges in Meeting Regulatory Requirements

RBS Approach to CRM Calculation

Modelling Approaches and Assumptions

Price Risk

Simulation of Market

Default Risk

 Appendix

Computational Implementation of CRM

33

Default Risk

Summary Schematic – Explanation

Simulating Defaults

Simulate defaults over a 1-year liquidity horizon

Approach employs same PD/Default correlation structure as IRC

Through-the-cycle (i.e., long term) PDs based on, for example, historically experienced default rates

• However, we don’t know what stage of the credit cycle we will be in 1 year in the future. Hence we need to stress these PDs.

Use a Merton firm value (Gaussian copula) type approach – familiar from IRC as described by the IRB.

Stress the common factor to give default correlation/contagion effects

Non-default spreads driven by the same systematic effects

We need to integrate over systematic effects (use Gauss-Hermite if 1 factor model, Monte Carlo if multi factor)

Recovery rates randomised (driven by systematic effects)

 Therefore we have a set of defaulted names (defaulted as per “real world” dynamics) and the times of default up to 1 year

Valuation

Given a set of defaults over 1 year, we would expect that the spreads of the non-defaulted firms will have changed:

If we do not allow contagion, FTD baskets would always make money on a default

We need to know the form of the entire market – index tranche prices, CDS spreads, FX, yield curves, basis etc. We propose to do this by using the value of the common factor to pick out dates where the empirical cumulative probability of the iTraxx/CDX index level corresponds to the cumulative probability of the common factor.
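The firm-value simulation can be sketched as below: a one-factor Gaussian copula with illustrative PD and correlation. The stressing of the common factor, the Gauss-Hermite integration over it, and the historical date-matching step are omitted.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_names, n_paths = 125, 20_000
pd_1y, rho = 0.02, 0.25            # illustrative through-the-cycle PD, correlation

threshold = norm.ppf(pd_1y)        # default iff firm value X_i < threshold

M = rng.standard_normal((n_paths, 1))             # common (systematic) factor
eps = rng.standard_normal((n_paths, n_names))     # idiosyncratic shocks
X = np.sqrt(rho) * M + np.sqrt(1.0 - rho) * eps   # Merton-style firm values

defaults = X < threshold                          # defaulted names per path
n_defaults = defaults.sum(axis=1)

print(n_defaults.mean() / n_names)   # close to the input PD of 0.02
# Clustering: the common factor pushes the default-count variance well
# above the independent binomial level n*p*(1-p)
print(n_defaults.var() > n_names * pd_1y * (1 - pd_1y))
```

The same common factor M that drives clustering is what would be mapped to an index quantile to pick the accompanying spread environment.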

34

Default Risk – Detailed Explanation

Then we:

Revalue the portfolio under the given market scenario incorporating randomised recoveries, defaults and blown-out spreads (V1)

Revalue the portfolio under the given market scenario incorporating blown-out spreads but with no defaults (V2)

Default P/L = V1 − V2

The impact of price risk is already captured

A series of default events will cause the spread environment to change (possibly markedly). The aim is to capture this cross effect – these products are nonlinear!

Tail risk identified by a 2-stage estimation process (importance sampling):

1. Large number of simulations (10,000) using approximate revaluation; select the subset (1,000) giving the largest approximate losses

2. Compute the corresponding losses using exact revaluations; find an appropriate tail average of these
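The two-stage scheme can be sketched generically. The 'approximate' and 'exact' revaluation functions here are hypothetical stand-ins; the point is only that the expensive function is called on the pre-selected tail, and that the tail average (here the 99.9th-percentile worst 10 of 10,000, our illustrative choice) is computed from the exact values.

```python
import numpy as np

rng = np.random.default_rng(3)

def approx_loss(scenarios):
    """Cheap proxy revaluation (hypothetical): fast but noisy."""
    return scenarios + 0.1 * rng.standard_normal(len(scenarios))

def exact_loss(scenarios):
    """Expensive books-and-records revaluation (hypothetical)."""
    return scenarios

# Stage 1: large number of simulations with the approximate revaluation
scenarios = rng.standard_normal(10_000)      # stand-in for default scenarios
stage1 = approx_loss(scenarios)
worst = np.argsort(stage1)[-1_000:]          # subset with largest approx losses

# Stage 2: exact revaluation only on the selected subset
stage2 = exact_loss(scenarios[worst])

# Tail average over the 10 worst exact losses (99.9% of the original 10,000)
tail = np.sort(stage2)[-10:]
print(tail.mean())
```

As long as the proxy noise is small relative to the gap between the 1,000th and 10th worst losses, the pre-selection captures the true tail while calling the exact model only 1,000 times instead of 10,000.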

35

Default Risk - Modelling Contagion Effects

36

Optimisation – Improving on Stein?

The 1 factor Gaussian copula (with random recovery) is very popular because rapid computational schemes exist

All such schemes are predicated on the fact that after conditioning on the common factor credits become conditionally independent

The standard approach - the so called ASB algorithm - computes this conditional loss distribution using recursion and is essentially exact

Various approximations exist, all of which seek to approximate this conditional loss distribution

Probably the most accurate approach in the literature is an application of the Stein approximation (El Karoui, 2008)

We have implemented Stein and extensively investigated its use for this problem. We have also developed an alternative (novel – i.e., not seen in the literature) Poisson approximation

Both approaches are significantly quicker than standard recursion (factor ~3)

Both methods have been compared with Random Recovery Recursion on actual Index Tranche and Bespoke portfolios, for a range of:

– Spread Scenarios

– Correlations

– Maturities

– Attachments / Detachments

– Our testing has encompassed stress events such as those produced by our market simulation.

37

Normal Approximation

• The conditional loss distribution is bounded between 0 and the (factor-dependent) maximum loss. When portfolio expected loss is not too low or high the loss distribution can be close to normal.

• Otherwise, however, the distribution can accumulate at either extreme and the normal approximation deteriorates.

• The figures below compare a 100-name homogeneous loss distribution with its approximating normal for different levels of expected portfolio loss. Extreme low or high expected losses will always arise since we are integrating across the market factor.

• (Note that these figures are qualitative comparisons only, where discrete distributions are normalized by grid size. The tranche prices themselves give the true quantitative comparison.)

(Figures: 100-name homogeneous conditional loss distribution versus its approximating normal, for expected portfolio loss = 0.01, 0.10 and 0.30.)

38

Standard Poisson Approximation

• A Poisson distribution is a natural approximation to the true conditional loss distribution when expected losses are low.

• The figures below compare the same 100-name homogeneous distribution with the usual Poisson approximation. As portfolio expected loss increases, the accuracy deteriorates.

• The ranges of accuracy of the Poisson and normal approximations are complementary, so a threshold for expected loss can be specified at which the approximation switches from Poisson to normal. For the example here this would typically be set around 0.10 to 0.15.

• If recoveries are inhomogeneous, however, the distribution will be sparse with a small loss unit or grid size, and the standard Poisson approach becomes problematic.

(Figures: 100-name homogeneous (binomial) loss distribution versus the standard Poisson approximation, at increasing levels of expected portfolio loss.)
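The complementary accuracy of the two approximations is easy to reproduce on the 100-name homogeneous example (one loss unit per name; the distance used here is a simple total-variation-style sum over the loss grid, with the normal density evaluated at the grid points as in the figures):

```python
import numpy as np
from scipy.stats import binom, norm, poisson

def tv(p, q):
    return 0.5 * np.abs(p - q).sum()

n = 100
k = np.arange(n + 1)

for p_def in (0.01, 0.30):                 # expected portfolio loss levels
    pmf = binom.pmf(k, n, p_def)           # exact conditional loss distribution
    mu, var = n * p_def, n * p_def * (1 - p_def)
    normal_pmf = norm.pdf(k, mu, np.sqrt(var))   # density on the unit grid
    poisson_pmf = poisson.pmf(k, mu)
    print(p_def, tv(pmf, poisson_pmf), tv(pmf, normal_pmf))

# At low expected loss the Poisson is the better fit; at high expected loss
# the normal wins -- hence the threshold switch described above.
```

At EL = 0.01 the Poisson distance is an order of magnitude smaller than the normal one; at EL = 0.30 the ranking reverses, because the Poisson variance (n·p) overshoots the binomial variance (n·p·(1−p)).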

39

Adjusted Poisson Approximation

• The standard Poisson approximation uses the same loss grid as the true distribution. If instead we allow the approximating Poisson to have its own loss unit, we gain an extra parameter and a more flexible approach.

• At low expected losses, the adjusted Poisson is very similar to the standard Poisson and the grid size is very close to that of the true distribution (homogeneous in this example).

• As expected loss increases, the grid size decreases and the adjusted Poisson smoothly changes over to be very close to normal.

(Figures: 100-name homogeneous (binomial) loss distribution versus the adjusted Poisson approximation, at increasing levels of expected portfolio loss.)

40

Comparison (Poisson vs Stein)

Price Differences versus Random Recovery Recursion: 0 – 3% Tranche

(Figures: surface plots of AssetLegPV relative error versus random recovery recursion (left: Stein − FullRec; right: Poisson − FullRec) for the 0-3% 5Y tranche on the CDX9 portfolio, across spread scaling factors 0.1-8.0 and flat correlations 5%-95%; errors range from about −1% to +6% of the asset leg PV.)

41

Comparison (Poisson vs Stein)

Price Differences versus Random Recovery Recursion: 9 – 12% Tranche

[Figure: Pricing Error, 9-12% Tranche 5Y (CDX9 portfolio) — asset leg PV relative error as a surface over spread scaling factor and flat correlation; left panel: Stein − FullRec, right panel: Poisson − FullRec; errors range from −40% to +10%]

42

Comparison (Poisson vs Stein)

Price Difference Comparison for all Bespoke Tranches

[Figure: Histograms of PV difference as % of notional across all bespoke tranches, over the range −1.00% to +1.00%; left panel: Stein vs full recursion, right panel: Poisson vs full recursion]

43

Default Risk Approximation

Default risk modelling is time-consuming

Under full revaluation - each time a default occurs, the model must, for each trade:

– Remove defaulted name from portfolio

– Calculate expected recovery of defaulted name

– Calculate adjusted portfolio expected loss

– Iteratively re-calibrate the loss fraction curve(s) based on the new portfolio expected loss

– Re-price adjusted tranche using new attachment and detachment points

In scenarios where a number of defaults occur (i.e. tail risk), computation times increase dramatically
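The "adjusted portfolio expected loss" step above can be illustrated with a minimal sketch. All names, probabilities and recoveries here are hypothetical flat inputs; the production model works with full recovery and spread curves rather than single numbers:

```python
def adjusted_expected_loss(notionals, default_probs, recoveries, defaulted_idx):
    """Recompute portfolio expected loss after removing a defaulted name.

    Expected loss is the default-probability-weighted loss-given-default
    summed over surviving names, expressed as a fraction of the
    surviving portfolio notional.
    """
    surv = [i for i in range(len(notionals)) if i != defaulted_idx]
    total = sum(notionals[i] for i in surv)
    el = sum(notionals[i] * default_probs[i] * (1.0 - recoveries[i])
             for i in surv)
    return el / total

# Two-name toy portfolio; name 0 defaults, leaving only name 1
el = adjusted_expected_loss([1.0, 1.0], [0.02, 0.04], [0.4, 0.4], 0)
assert abs(el - 0.04 * 0.6) < 1e-9
```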

44

Default Risk Approximation – PV Interpolation

The time-consuming step in calculating the PV impact of defaults is recalibrating the tranche loss fraction curve for the new portfolio after defaulted names have been removed

To calculate the PV of a tranche on a portfolio that has experienced defaults, this recalibration step can be circumvented if we keep the portfolio the same but adjust the tranche attachment point down by the loss amount:

[Diagram: a 5%–6% tranche; simulated defaults (1% default with 50% recovery) produce a 0.5% portfolio loss. Under full revaluation the defaulted name is removed and the 5%–6% tranche is re-priced on the reduced portfolio; under PV interpolation the portfolio is kept and the tranche is re-priced as a 4.5%–5.5% tranche.]

Operationally, for each trade portfolio, the mid spreads and durations of 15 tranches beneath the attachment point of the original tranche are pre-calculated

These tranches have the same thickness as the original transaction

The specific pre-calculated tranches depend on the original tranche's attachment and detachment points

Based on the simulated number of defaults, a loss amount is calculated, along with the corresponding loss in subordination of the original tranche

The PV of the defaulted tranche is then obtained by interpolating the subordination-adjusted tranche against the pre-calculated tranches
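The interpolation step can be sketched as follows. Function and variable names are illustrative, linear interpolation is assumed, and the grid stands in for the 15 pre-calculated tranches described above:

```python
import bisect

def pv_by_interpolation(pre_attach, pre_pv, loss, attach):
    """Price a tranche after defaults by PV interpolation.

    pre_attach: sorted attachment points of the pre-calculated tranches
                (same thickness as the original, below its attachment)
    pre_pv:     their pre-calculated PVs
    loss:       simulated loss eaten out of the tranche's subordination
    attach:     original attachment point
    Returns the PV interpolated at the subordination-adjusted attachment.
    """
    adj = max(attach - loss, 0.0)     # subordination reduced by the loss
    i = bisect.bisect_left(pre_attach, adj)
    i = min(max(i, 1), len(pre_attach) - 1)
    x0, x1 = pre_attach[i - 1], pre_attach[i]
    y0, y1 = pre_pv[i - 1], pre_pv[i]
    w = (adj - x0) / (x1 - x0)
    return (1.0 - w) * y0 + w * y1

# Toy grid of three pre-calculated tranches; a 0.5% loss moves the
# original 5% attachment halfway between the 4% and 5% grid points
pv = pv_by_interpolation([0.03, 0.04, 0.05], [3.0, 2.0, 1.0], 0.005, 0.05)
assert abs(pv - 1.5) < 1e-9
```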

45
