
BF 410 MODULE 15.4.2020

BF 410
For
Bachelor of Commerce in Banking and Finance
in the
Faculty of Commerce
at the
Midlands State University
Harare Campus
Feb/March 2020
TABLE OF CONTENTS
LIST OF TABLES..................................................................................................................?
LIST OF FIGURES...............................................................................................................?
LIST OF ABBREVIATIONS..................................................................................................?
-i-
LIST OF DEFINITIONS.........................................................................................................?
1.
INTRODUCTION AND OVERVIEW OF RISK ........................................................................ 1
1.1
THE CONCEPT OF RISK ................................................................................................ 1
1.1.1
1.2
THE RISK MANAGEMENT PROCESS ........................................................................... 3
1.3
RISK CLASSES............................................................................................................... 6
1.4
RISK APPETITE FRAMEWORK ..................................................................................... 7
2
2. INFAMOUS RISK MANAGEMENT DISASTERS ............................................................. 12
2.1
METALLGESELLSCHAFT REFINING AND MARKETING (1993) ................................. 12
2.2
ORANGE COUNTY (1994) ............................................................................................ 13
2.3
BARINGS ...................................................................................................................... 14
2.4
DAIWA .......................................................................................................................... 14
2.5
LONG-TERM CAPITAL MANAGEMENT (1998). ........................................................... 14
2.6
FACTORS CONTRIBUTING THE 2007 GLOBAL CREDIT CRISIS............................... 15
2.6.1
Relaxed Lending Standards ................................................................................... 15
2.6.2
The Housing Bubble .............................................................................................. 16
2.7
3
LESSONS FOR FINANCIAL RISK MANAGEMENT ...................................................... 17
3. GREEK LETTER.............................................................................................................. 19
3.1
DELTA ........................................................................................................................... 19
3.1.1
Option Delta ........................................................................................................... 20
3.1.2
Dynamic Aspects of Delta Hedging ........................................................................ 23
3.1.3
Forward Delta ........................................................................................................ 23
3.1.4
Futures Delta ......................................................................................................... 24
3.2
THETA ......................................................................................................................... 24
3.3
GAMMA ........................................................................................................................ 26
3.3.1
4
Expected loss, Unexpected losses and Black swan events...................................... 1
Relationship Among Delta, Theta, and Gamma ..................................................... 29
3.4
VEGA ............................................................................................................................ 31
3.5
RHO .............................................................................................................................. 33
OPERATIONAL RISK .......................................................................................................... 35
- ii -
4.1
CATEGORISATION OF OPERATIONAL RISK ............................................................. 36
4.2
DETERMINATION OF REGULATORY CAPITAL .......................................................... 37
4.3
BIA/TSA/AMA ................................................................................................................ 38
4.4
LOSS SEVERITY AND LOSS FREQUENCY ................................................................ 39
4.5
FORWARD LOOKING APPROACHES ......................................................................... 40
4.5.1
Causal Relationships ............................................................................................. 40
4.5.2
RCSA and KRIs ..................................................................................................... 40
5
POPULAR RISK MEASURES FOR PRACTITIONERS ....................................................... 42
5.1
INTRODUCTION ........................................................................................................... 42
5.2
VAR ............................................................................................................................... 42
5.2.1
6
Computing value at risk. ........................................................................................ 43
5.3
STRESS TESTING. ....................................................................................................... 52
5.4
A NEW CLASSIFICATION OF RISK MEASURES ......................................................... 58
GROUP ASSIGNMENTS ..................................................................................................... 78
- iii -
- iv -
1. INTRODUCTION AND OVERVIEW OF RISK
This is an introductory topic that provides coverage of fundamental risk management concepts that will be discussed in much more detail throughout the course.
For the exam, it is important to understand the general risk measurement and risk management
process and its potential shortcomings. Also, the material on course objectives in the module
outline accompanying these notes contains several testable concepts.
1.1 THE CONCEPT OF RISK
• Risk arises from the uncertainty regarding an entity's future losses as well as future gains.
• Risk is not necessarily related to the size of the potential loss. For example, many potential losses are large but are quite predictable and can be provided for using risk management techniques.
• The more important concern is the variability of the loss, especially a loss that could rise to unexpectedly high levels or a loss that occurs suddenly and was not anticipated.
• Therefore, in simplified terms, there is a natural trade-off between risk and return.

As a starting point, risk management includes the sequence of activities aimed at reducing or eliminating an entity's potential to incur expected losses. On top of that, there is a need to manage the unexpected variability of some costs.

• In managing both expected and unexpected losses, risk management can be thought of as a defensive technique.
• However, risk management is broader in the sense that it considers how an entity can consciously determine how much risk it is willing to take to earn uncertain future returns, which involves risk-taking.
• Risk-taking refers specifically to the active assumption of incremental risk to generate incremental gains.
• In that regard, risk-taking can be thought of in an opportunistic context.

1.1.1 Expected Loss, Unexpected Losses and Black Swan Events
Expected loss considers how much an entity expects to lose in the normal course of business.
• It can often be computed in advance (and provided for) with relative ease because of the certainty involved.
For example, a retail business that provides credit terms on sales of goods to its customers
(i.e., no need to pay immediately) incurs the risk of non-payment by some of those customers.
If the business has been in operation for at least a few years, it could use its operating history
to reasonably estimate the percentage of annual credit sales that will never be collected. The
amount of the loss is therefore predictable and is treated as a regular cost of doing business
(i.e., bad debt expense on the income statement). It can be priced into the cost of the goods
directly in the case of the retail business. In contrast, in lines of business in the financial
sector, the cost could be recovered by charging commissions on various financial
transactions or by implementing spreads between a financial institution’s lending rate to
borrowers and its cost of obtaining those funds.
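As a rough illustration, the expected-loss estimate can be produced from exactly the operating history just described. The sketch below uses purely hypothetical sales and write-off figures:

```python
# A minimal sketch (hypothetical numbers) of estimating a retailer's
# expected credit loss from its own operating history.

historical_credit_sales = [120_000, 135_000, 150_000, 160_000]   # annual credit sales
historical_write_offs   = [2_300, 2_800, 3_100, 3_200]           # amounts never collected

# The average historical loss rate serves as the expected-loss estimate.
loss_rate = sum(historical_write_offs) / sum(historical_credit_sales)

projected_credit_sales = 175_000
expected_loss = loss_rate * projected_credit_sales   # budgeted as bad debt expense

print(f"Estimated loss rate: {loss_rate:.2%}")
print(f"Expected loss to provide for: ${expected_loss:,.0f}")
```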
Unexpected loss considers how much an entity could lose outside of the normal course of
business.
• Compared to expected loss, it is generally more difficult to predict, compute, and provide for in advance because of the uncertainty involved.
For example, consider a commercial loan portfolio that is focused on loans to automotive
manufacturing companies. During an economic expansion that favours such companies
(because individuals have more disposable income to spend on items such as automobiles),
the lender will realize very few, if any, loan defaults. However, during an economic recession,
there is less disposable income to spend and many more loan defaults are likely to occur from
borrowers, likely at the same time. This is an example of correlation risk, in which unfavourable events happen together. Correlation risk drives the potential losses up to unexpected levels.
Another example of correlation risk lies with real estate loans secured by real property. Borrowers
tend to default on such loans (i.e., default rate risk) at the same time that the real property values
fall (i.e., recovery rate risk— the creditor’s collateral is worth less, thereby compromising the
recovery rate on the funds lent to the borrowers). These two risks occurring simultaneously could
drive up the potential losses to unexpected levels. Realizing the existence of correlation risks helps
a risk manager measure and manage unexpected losses with somewhat more certainty. For
example, historical analysis of the extent of such losses in the past due to correlation risk could be
performed, taking into account which risk factors were involved.
The black swan metaphor has gained increased attention in the risk management field, in particular following the publication of Taleb's book 'The Black Swan: The Impact of the Highly Improbable'. A black swan event is characterised by the following three attributes:
1. It is an outlier, as it lies outside the realm of regular expectations because nothing in the past
can convincingly point to its possibility.
2. It carries an extreme impact.
3. Despite its outlier status, human nature makes us concoct explanations for its occurrence after
the fact, making it explainable and predictable (retrospective predictability).
Three categories of black swan events can be distinguished:
a) Unknown unknowns
b) Unknown knowns
c) Events with negligible probability
1.2 RISK MANAGEMENT AND THE RISK MANAGEMENT PROCESS
Risk management is the process of identifying, assessing and addressing risk, as well as reviewing and reporting it. The key steps in risk management thus involve: identify, measure, monitor/evaluate, and review and report continuously.
Identification of risk
This is a continuous process because risks change. There are two major processes of identifying
risks:
i. Commissioning a risk review – a group of individuals looks at the operations of the organisation that could be at risk.
ii. Risk self-assessment – each department looks at its operations and identifies what the risk factors could be; these are then managed by the central office.
Quantification of risk (Measurement)
Risk measurement should follow these principles:
1. Record it in a way that facilitates the monitoring and identification of risk priorities.
2. Be clear about the inherent and residual risk.
3. Ensure that there is a structured process in which both likelihood and impact are assessed.
Addressing Risk (Monitoring/Evaluation)
There are five ways of addressing risk:
1. Accept/Tolerate the risk – take no action, provided the risk is within the risk appetite. Accepting the risk means that while you have identified it and logged it in your risk register, you take no action. You simply accept that it might happen and decide to deal with it if it does. This is a good strategy for very small risks – risks that won't have much of an impact on your business if they happen and could easily be dealt with if or when they arise.
2. Mitigate the risk – mitigating risk is probably the most commonly used risk management technique. It is also the easiest to understand and the easiest to implement. Mitigation means taking actions that limit the impact of a risk to acceptable levels, so that if it does occur, the problem it creates is smaller and easier to fix.
3. Transfer the risk – pass the risk to a party willing to bear it (e.g., insurance companies or other third parties) in exchange for a premium.
4. Avoid the risk – change your plans completely to avoid the risk. This is a good strategy when a risk has a potentially large impact on your business.
5. Exploit the risk – acceptance, avoidance, transference and mitigation are appropriate when the risk harms the project. But what if the risk has a positive impact? Exploitation is the risk management strategy to use in these situations: look for ways to make the risk happen or to increase its impact if it does.
Reviewing and Reporting
Reviewing – of key importance is to identify new risks and risks that have become irrelevant.
Reporting – it is important to report in a clear, concise and accessible language that is easily
understandable so that the right action can be taken.
Risk factors
They are environmental conditions that give rise to risk exposures, i.e., the chance that some identifiable event may happen and impose economic costs on, or inflict losses upon, the organization. The most commonly recognized risk factors relate to changes in financial market prices, e.g., exchange rate risk, interest rate risk, commodity price risk and stock market price risk.
The Risk Management Process
The risk management process thus involves the following five steps:
Step 1: Identify the risks.
Step 2: Quantify and estimate the risk exposures or determine appropriate methods to transfer the
risks.
Step 3: Determine the collective effects of the risk exposures or perform a cost-benefit
analysis of risk transfer methods.
Step 4: Develop a risk mitigation strategy (i.e., avoid, transfer, mitigate, or assume risk).
Step 5: Assess performance and amend the risk mitigation strategy as needed. In practice, this
process is not likely to operate perfectly in the above sequence.
Two key problems with the process include identifying the correct risk(s) and finding an efficient
method to mitigate the risk.
One of the challenges in ensuring that risk management will be beneficial to the economy is that
risk must be sufficiently dispersed among willing and able participants in the economy.
Unfortunately, a notable failure of risk management occurred during the financial crisis between
2007 and 2009 when it was subsequently discovered that risk was too concentrated among too
few participants.
Another challenge of the risk management process is that it has failed to consistently assist in
preventing market disruptions or preventing financial accounting fraud (due to corporate
governance failures). For example, the existence of derivative financial instruments greatly
facilitates the ability to assume high levels of risk and the tendency of risk managers to follow each
other’s actions (e.g., selling risky assets during a market crisis, which disrupts the market by
increasing its volatility). In addition, the use of derivatives as complex trading strategies assisted in
overstating the financial position (i.e., net assets on the balance sheet) of many entities and
understating the level of risk assumed by many entities. Even with the best risk management
policies in place, using such inaccurate information would not allow the policies to be effective.
Finally, risk management may not be effective on an overall economic basis because it only
involves risk transferring by one party and risk assumption by another party. It does not result in
overall risk elimination. In other words, risk management can be thought of as a zero-sum game in
that some “winning” parties will gain at the expense of some “losing” parties. However, if enough
parties suffer devastating losses due to an excessive assumption of risk, it could lead to a
widespread economic crisis.
1.3 RISK CLASSES
Financial risk is not a monolithic entity. The classic view of risk categorizes it into several broad
types:
1. market,
2. credit,
3. operational,
4. liquidity, and
5. legal and regulatory.
This classic view has provided a backbone for the development of the science of risk management.
The categories reveal fundamental differences in the economics of each type of risk. In many
financial institutions, these categories are also reflected in the organization of the risk management
function.
Market risk is generally defined as the risk of a decline in asset prices as a result of unexpected
changes in broad market factors related to equity, interest rates, currencies, or commodities.
Market risk is probably the best understood type of risk and the type for which large amounts of
good quality data are the most readily available. A variety of measures, such as standard
deviation, value at risk, and conditional value at risk, are readily available to evaluate market risk.
Credit risk is the next best understood financial risk after market risk. It measures the possibility of a decline in an asset price resulting from a change in the credit quality of a counterparty or issuer (e.g., the counterparty in an OTC (over-the-counter) transaction, the issuer of a bond, or the reference entity of a credit default swap). Credit risk increases when the counterparty's perceived probability of default or rating downgrade increases. The estimation of credit loss requires knowledge of several variables: the default probability, the exposure at default, and the loss given default (a simple computation is sketched after the list below). Five main methodologies are available to estimate these variables, namely:
1. Credit migration,
2. Structural models,
3. Intensity models,
4. Actuarial approach, and
5. Large portfolio models.
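Once the default probability, exposure at default, and loss given default have been estimated by one of these methodologies, the expected credit loss follows from a single multiplication. A minimal sketch with assumed, illustrative inputs:

```python
# A minimal sketch of the expected credit loss calculation:
# EL = PD x EAD x LGD (all inputs below are assumed, illustrative values).

pd_default = 0.02        # probability of default over the horizon (2%)
ead = 1_000_000          # exposure at default, in dollars
lgd = 0.45               # loss given default, i.e., 1 - recovery rate

expected_loss = pd_default * ead * lgd
print(f"Expected credit loss: ${expected_loss:,.0f}")   # $9,000
```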
Operational risk is defined by the Basel Committee as “the risk of loss resulting from inadequate
or failed internal processes, people and systems, or from external events.” Thus, operational risk
can result from such diverse causes as fraud, inadequate management and reporting structures,
inaccurate operational procedures, trade settlement errors, faulty information systems, or natural
disaster.
Liquidity risk is the risk of being unable to either raise the necessary cash to meet short term
liabilities (i.e., funding liquidity risk) or buy or sell a given asset at the prevailing market price
because of market disruptions (i.e., trading-related liquidity risk). The two dimensions are
interlinked because to raise cash to repay a liability (funding liquidity risk), an institution might need
to sell some of its assets (and incur trading-related liquidity risk).
Legal and regulatory risk is the risk of a financial loss that is the result of an erroneous
application of current laws and regulations or of a change in the applicable law (such as tax law).
The publication of numerous articles, working papers, and books has marked the unparalleled
advances in risk management.
Refer to the following references for a comprehensive discussion of the science of risk management:
Das (2005) provided a general overview of the practice of risk management, mostly from the
perspective of derivatives contracts.
Embrechts, Frey, and McNeil (2005) emphasized the application of quantitative methods to risk
management.
Crouhy, Galai, and Mark (2001, 2006) are two solid risk management references for practitioners
working at international banks with special attention given to the regulatory framework.
Jorion (2007) gave an overview of the practice of risk management through information on banking
regulations, a careful analysis of financial disasters, and an analysis of risk management pitfalls.
He also made a strong case for the use of value-at-risk-based risk measurement and illustrated
several applications and refinements of the value-at-risk methodology.
Bernstein (1998) is another key reference. This masterpiece gives a vibrant account of the
evolution of the concept of risk.
1.4 RISK APPETITE FRAMEWORK
A defined risk appetite statement and a properly designed risk appetite framework (RAF) are both required to properly
manage a firm’s risk and serve as an important element of corporate governance. They are meant
to be used together to provide clear guidance in risk management. They also attempt to achieve
risk management congruence regarding the expectations of the board of directors, senior
management, the firm’s risk management team, regulatory agencies, and stakeholders.
Risk appetite is defined as “the amount and type of risk that a firm is able and willing to accept in
pursuit of its business objectives”. The firm’s risk appetite must not exceed its risk capacity (i.e.,
the maximum amount of risk the firm can take). In developing a useful RAF to manage risk, the
RAF should not be viewed as a set of standalone rules or tasks. Instead, it needs to be viewed as
an integral part of a firm’s risk culture. RAFs require a substantial amount of judgment in terms of
development and focal points. Additionally, there are a wide variety of risk cultures and business
types in existence. As a result, a given firm should not expect to follow a “standard” approach when
employing an RAF for risk management purposes.
RAFs may assist in providing context for (measurable) controls such as risk policies and risk limits.
That context may help increase the awareness and acceptance throughout the firm of such
controls. Constant communication between the board, senior management, risk management, and
business unit managers about risk appetite and risk profiles is required to ensure that the RAF
functions effectively throughout the firm. Some attention should be given to how the RAF may
evolve over time and the nature of the risks taken in the individual business units to ensure they
are consistent with the overall risk appetite.
Properly transmitting the RAF within the firm and incorporating it into day-to-day operating decisions.
Although quantitative risk limits appear to be easily transmitted from the top down throughout a
firm, other areas within the RAF are less concrete and difficult to translate into risk policies and
measures when making operating decisions. Although top-level management may be well versed
in the RAF and its impact on the firm, it may not be the case with middle management, for
example. Therefore, the challenge is to train a broader range of employees on the details of the
RAF and how it is beneficial in order to achieve employee acceptance.
Establishing a clear connection between RAFs and risk culture.
A firm that possesses a strong risk culture may be able to reduce some of its reliance on strict
compliance with established limits and rules. However, a properly working IT system, robust
internal controls, and the existence of limits are still necessary even with a strong risk culture. Also,
risk appetite must be clearly tied to employee remuneration. For example, prior to any incentive
payouts, consider whether the firm’s financial performance was accomplished within the
established limits and was consistent with the risk culture and risk appetite.
Communicating risk appetite in a manner that captures all relevant risks.
This issue specifically refers to “qualitative” risks such as reputation risk or strategic risk in that
they are less quantifiable than “quantitative” risks such as market risk or credit risk. Regardless,
the challenge remains for the RAF to fully incorporate less quantifiable risks. Specifically, it is
difficult to identify and properly mitigate such risks. Solutions could include attempting to quantify
such risks through proxy measures as well as using both quantitative and qualitative objectives
when setting risk appetite.
The common view that risk appetite is mainly about setting limits.
A significant component of risk appetite involves limits and risk policies; however, there is another
important side that needs to be highlighted within the firm. Risk appetite has a strong basis in
managing the firm’s overall risk, determining its business strategy, and maximizing its return.
Again, education and training of the employees is necessary to ensure that they see the RAF as a
benefit and not an impediment to day-to-day activities. However, the essential point is that the RAF should not be so strict and inflexible that it ignores changes in the business and its strategy. At the same time, it should not be so loose and flexible that it is easily amended, lacks discipline, or becomes impossible to understand and manage.
The lack of connection between risk appetite and the strategic and business planning
processes.
As stated earlier, an RAF may be set at the top level and, in turn, cascade downward in the form of
specific limits within the various business units. However, that perception of risk appetite needs to
be broadened and should allow for translation into specific guidance within the business units. In
other words, the key challenge is to integrate risk appetite throughout the firm to allow for the RAF
to directly impact the firm's business and strategic decisions instead of being relegated to a minor role in the overall risk management process. The aim is to make the process collaborative among as many groups as possible.
The role of stress testing in the RAF.
Although there is little doubt that stress testing should be included in the RAF, there is a great deal
of uncertainty as to how exactly it should be included. The use of stress testing could range from
being a mere “sanity check” of the risk appetite to being the core component of it. The challenge
lies in interpreting the results of the stress tests and determining how much of the RAF should
account for extreme but plausible situations.
Aggregation of risks at the group level and then down to the individual business units.
Although in theory, the process should work easily, in practice, there is still no uniform method to
achieve it. The challenge lies in developing one standard approach to translate high-level (and
sometimes qualitative) risk appetite statements into more objective measures in the form of risk
limits and tolerances for each of the business units. There needs to be consistency in terms of how
the individual business units set their risk tolerances and the risk appetite of the overall firm.
2. INFAMOUS RISK MANAGEMENT DISASTERS
Risk management is an art as much as a science. It reflects not only the quantification of risks
through risk measurement but also a more profound and concrete understanding of the nature of
risk. The study of past financial disasters is an important source of insights and a powerful
reminder that when risks are not properly understood and kept in check, catastrophes may easily
occur. Following is a review of some past financial disasters.
2.1 METALLGESELLSCHAFT REFINING AND MARKETING (1993)
Although dated, the story of the Metallgesellschaft Refining and Marketing (MGRM) disaster is still
highly relevant today because it is a complex and passionately debated case.
Questions remain, such as, was MGRM’s strategy legitimate hedging or speculation? Could and
should the parent company, Metallgesellschaft AG, have withstood the liquidity pressure? Was the
decision to unwind the strategy in early 1994 the right one?
If the debates about the MGRM disaster show us anything, it is that risk management is more than
an application of quantitative methods and that key decisions and financial strategies are open to
interpretation and debate.
In December 1991, MGRM, the U.S.–based oil marketing subsidiary of German industrial group
Metallgesellschaft AG, sold forward contracts guaranteeing its customers certain prices for 5 or 10
years. By 1993, the total amount of contracts outstanding was equivalent to 150 million barrels of
oil-related products. If oil prices increased, this strategy would have left MGRM vulnerable. To
hedge this risk, MGRM entered into a series of long positions, mostly in short-term futures (some
for just one month). This practice, known as “stack hedging,” involves periodically rolling over the
contracts as they near maturity to maintain the hedge. In theory, maintaining the hedged positions
through the life of the long-term forward contracts eliminates all risk. But intermediate cash flows
may not match, which would result in liquidity risk. As long as oil prices kept rising or remained
stable, MGRM would be able to roll over its short-term futures without incurring significant cash
flow problems. Conversely, if oil prices declined, MGRM would have to make large cash infusions
in its hedging strategy to finance margin calls and roll over its futures. In reality, oil prices fell
through 1993, resulting in a total loss of $1.3 billion on the short-term futures by the end of the
year. Metallgesellschaft AG’s supervisory board took decisive actions by replacing MGRM’s senior
management and unwinding the strategy at an enormous cost. Metallgesellschaft AG was only
saved by a $1.9 billion rescue package organized in early 1994 by 150 German and international
banks.
Selected discussion in the literature
Mello and Parsons’ (1995) analysis generally supported the initial reports in the press that equated
the Metallgesellschaft strategy with speculation and mentioned funding risk as the leading cause of
the company’s meltdown.
Culp and Miller (1995a, 1995b) took a different view, asserting that the real culprit in the debacle
was not the funding risk inherent in the strategy but the lack of understanding of Metallgesellschaft
AG’s supervisory board. Culp and Miller further pointed out that the losses incurred were only
paper losses that could be compensated for in the long term. By choosing to liquidate the strategy,
the supervisory board crystallized the paper losses into actual losses and nearly bankrupted their
industrial group.
Edwards and Canter (1995) broadly agreed with Culp and Miller’s analysis: The near collapse of
Metallgesellschaft was the result of disagreement between the supervisory board and MGRM
senior management on the soundness and appropriateness of the strategy.
2.2 ORANGE COUNTY (1994)
At the beginning of 1994, Robert Citron, Orange County’s treasurer, was managing the Orange
County Investment Pool with equity valued at $7.5 billion. To boost the fund’s return, Citron
decided to use leverage by borrowing an additional $12.5 billion through reverse repos. The assets
under management, then worth $20 billion, were invested mostly in agency notes with an average
maturity of four years. Citron’s leveraged strategy can be viewed as an interest rate spread
strategy on the difference between the four-year fixed investment rate over the floating borrowing
rate.
The underlying bet is that the floating rate will not rise above the investment rate. As long as the
borrowing rate remains below the investment rate, the combination of spread and leverage would
generate an appreciable return for the investment pool. But if the cost of borrowing rises above the
investment rate, the fund would incur a loss that leverage would magnify.
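The leverage effect can be illustrated with a rough sketch of the pool's position; the investment and funding rates below are assumed purely for illustration and are not Orange County's actual book:

```python
# A minimal sketch of a leveraged interest rate spread strategy,
# using the pool's reported equity and borrowing (rates are assumed).

equity = 7.5e9          # the pool's own funds
borrowed = 12.5e9       # raised through reverse repos
assets = equity + borrowed

invest_rate = 0.05      # assumed fixed rate earned on four-year agency notes

for borrow_rate in (0.03, 0.08, 0.10):     # floating repo funding cost
    income = assets * invest_rate - borrowed * borrow_rate
    print(f"funding at {borrow_rate:.0%}: return on equity = {income / equity:+.1%}")
```

With cheap funding the leverage magnifies the spread into a high return on equity; once the borrowing cost passes the breakeven rate, the same leverage magnifies the loss.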
Unfortunately for Orange County, its borrowing cost rose sharply in 1994 as the U.S. Federal
Reserve Board tightened its federal funds rate. As a result, the Orange County Investment Pool
accumulated losses rapidly. By December 1994, Orange County had lost $1.64 billion. Soon after,
the county declared bankruptcy and began liquidating its portfolio.
Selected discussions in the literature
Jorion (1997) pointed out that Citron benefited from the support of Orange County officials while
his strategy was profitable—it earned up to $750 million at one point. But he lost their support and
was promptly replaced after the full scale of the problem became apparent, which subsequently
resulted in the decisions to declare bankruptcy and liquidate the portfolio.
The opinion of Miller and Ross (1997), however, was that Orange County should neither have
declared bankruptcy nor liquidated its portfolio. If the county had held on to the portfolio, Miller and
Ross estimated that Orange County would have erased its losses and possibly even made
some gains in 1995.
2.3 BARINGS
A single Singapore-based futures trader, Nick Leeson, incurred a $1.3 billion loss that bankrupted the 233-year-old Barings Bank. Leeson had accumulated long positions in Japanese Nikkei 225
futures with a notional value totaling $7 billion. As the Nikkei declined, Leeson hid his losses in a
“loss account” while increasing his long positions and hoping that a market recovery would return
his overall position to profitability. But in the first two months of 1995, Japan suffered an
earthquake and the Nikkei declined by around 15 percent. Leeson’s control over both the front and
back office of the futures section for Barings Singapore was a leading contributor to this disaster
because it allowed him to take very large positions and hide his losses. Another main factor was
the blurry matrix-based organization charts adopted by Barings. Roles, responsibilities, and
supervision duties were not clearly assigned. This lack of organization created a situation in which
regional desks were essentially left to their own devices.
2.4 DAIWA
A New York–based trader for Daiwa Bank, Toshihide Iguchi, accumulated $1.1 billion of losses over an 11-year period. As in Leeson's case, Iguchi had control over both the front and back offices, which made it easier to conceal his losses.
2.5 LONG-TERM CAPITAL MANAGEMENT (1998)
Veteran trader John Meriwether launched this hedge fund in 1994. At the time of its collapse, LTCM boasted such prestigious advisers and executives as Nobel Prize winners Myron Scholes and Robert Merton. The fund relied on openly quantitative strategies to take non-directional convergence or relative-value long–short trades. For example, the fund would buy a presumably
cheap security and short sell a closely related and presumably expensive security, with the
expectation that the prices of both securities would converge. Initially a success, the fund collapsed
spectacularly in the summer of 1998, losing $4.4 billion, only to be rescued in extremis by the U.S.
Federal Reserve Bank and a consortium of banks.
Selected discussions in the literature
Jorion (2000) demonstrated, using Markowitz’s mean–variance analysis, that applying optimization
techniques to identify relative value and convergence trades often generates an excessive degree
of leverage. The resulting side effect is that the risk of the strategy is particularly sensitive to
changes in the underlying correlation assumptions. This danger was then compounded by LTCM’s
use of very recent price data to measure event risk. According to Jorion, “LTCM failed because of
its inability to measure, control, and manage its risk.” To prevent other such disasters, Jorion
suggested that risk measures should account for the liquidity risk arising in the event of forced
sales and that stress testing should focus on worst-case scenarios for the current portfolio.
2.6 FACTORS CONTRIBUTING TO THE 2007 GLOBAL CREDIT CRISIS
The period leading up to the 2007 credit crisis, especially the period between 2000 and
2006, was characterized by ever-increasing real estate prices within a very low interest rate
environment. This period saw a significant rise in subprime mortgage lending. Subprime mortgages
are mortgages that are considered to be a higher risk than traditional mortgages and are granted to
borrowers with weak credit histories.
2.6.1 Relaxed Lending Standards
With the increase in home prices in the USA leading up to the year 2000, many families found
themselves unable to qualify for mortgages and afford a home based on their incomes. In addition,
many families with weaker credit histories did not have a sufficiently strong credit profile to qualify
for a mortgage. Starting around 2000, mortgage lenders began to relax their mortgage underwriting
standards in order to attract new entrants into the market and began lending more to higher-risk
borrowers. It was typical for lenders to offer adjustable rate mortgages (ARMs) with teaser rates
that were very low for the first few years before the rates increased significantly in later years.
Teaser rates of 1% or 2% were not uncommon. From the lenders’ perspective, risks were low as
the continued increase in home prices meant that a potential borrower default was adequately
mitigated by a stable and increasing collateral value (i.e., the home).
At the same time, the federal government pressured lenders to increase lending to low- and medium-income households and was not incentivized to regulate mortgage lending. Relaxed
lending standards and the lack of adequate government regulation gave rise to predatory lending.
Liar loans (no vetting of the accuracy of an applicant’s information) and NINJA borrowers (no
income, no job, no assets) became common. As lending standards were relaxed, certain zip codes
in the United States that previously had high levels of rejected mortgage applications saw a
material rise in mortgage origination (i.e., more applications were accepted) during the 2000-2007
period.
2.6.2 The Housing Bubble
As mortgage origination increased and lending standards were relaxed, additional demand
continued to drive up home prices. However, by the second half of 2006, many of the teaser rate periods had ended. At the higher interest rates, borrowers could no longer afford their mortgages, and lenders
were forced to foreclose on their homes. This put downward pressure on demand, and home
prices started to decline. As more homes went into foreclosure, the supply of homes for sale increased, and demand
and home prices declined further in a self-feeding loop. An important feature of mortgage lending
in the United States is that in several states, mortgages are nonrecourse. Under a nonrecourse
mortgage, a lender can only take possession of (have recourse to) the borrower’s home but not to
any of their other assets. It is important to understand the implications of this feature. In essence,
when borrowers took out a mortgage, they also purchased an American-style put option that
allowed them to sell their home at any time until mortgage expiration for the principal outstanding
on the mortgage. For borrowers, especially for those who borrowed 100% or close to 100% of the
value of their homes, this meant that when their home price declined below the outstanding value
of the mortgage resulting in negative equity in their homes, it was no longer in the borrower’s best
interest to service this mortgage. Instead, the borrower’s optimal decision was to exercise the put
option and sell the home to the lender at the price of their outstanding mortgage.
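The borrower's walk-away decision can be sketched as the payoff of that embedded put; the figures below are hypothetical, purely to illustrate the logic:

```python
# A minimal sketch of the nonrecourse borrower's "embedded put" decision.
# Numbers are hypothetical.

def walk_away_gain(home_value: float, mortgage_balance: float) -> float:
    """Economic gain from defaulting: like exercising a put struck
    at the outstanding mortgage balance (nonrecourse loan)."""
    return max(mortgage_balance - home_value, 0.0)

balance = 300_000
for home_value in (350_000, 300_000, 240_000):
    gain = walk_away_gain(home_value, balance)
    action = "keep paying" if gain == 0 else f"default (saves ${gain:,.0f})"
    print(f"home worth ${home_value:,}: {action}")
```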
Many borrowers suffered greatly as they lost their family homes. For other borrowers, foreclosing
was simply the economically feasible solution. With the increase in foreclosures, lenders were
faced with diminishing recovery rates, which declined from an average 75% prior to the crisis to as
low as 25% during the crisis.
Other Failures:
Bankers Trust
AIG
Société Générale
Lehman Bros
Northern Rock
Source: Hull, Options, Futures, and Other Derivatives, 2015.
2.7 LESSONS FOR FINANCIAL RISK MANAGEMENT
Adequate Controls Are Crucial
Although a single source of risk may create large losses, it is not generally enough to result in an
actual disaster. For such an event to occur, several types of risks usually need to interact. Most
importantly, the lack of appropriate controls appears to be a determining contributor.
Although inadequate controls do not trigger the actual financial loss, they allow the organization to
take more risk than necessary and also provide enough time for extreme losses to accumulate.
Thus, risk management is a management problem. Financial disasters do not occur randomly—
they reveal deep flaws in the management and control structure. One way of improving control
structure is to keep the various trading, compliance, and risk management responsibilities
separated.
Is a large financial loss a failure of risk management?
The role of risk management involves performing the following tasks.
• Assess all risks faced by the firm.
• Communicate these risks to risk-taking decision makers.
• Monitor and manage these risks (make sure that the firm only takes the necessary amount of risk).
The risk management process focuses on the output of a particular risk metric [e.g., the value at
risk (VaR) for the firm] and attempts to keep the measure at a specified target amount. When a
given risk measure is above (below) the chosen target amount, the firm should decrease (increase)
risk. The risk management process usually evaluates several risk metrics (e.g., duration, beta). A
large loss is not necessarily an indication of a risk management failure. As long as risk managers
understood and prepared for the possibility of loss, then the implemented risk management was
successful. With that said, the main objective of risk management should not be to prevent losses.
However, risk management should recognize that large losses are possible and develop
contingency plans that deal with such losses if they should occur.
3. GREEK LETTERS
The level of risk associated with an option position is dependent in large part on the following
factors:
• the relationship between the value of a position involving options and the value of the underlying assets;
• the time until expiration;
• the volatility of the underlying asset's value; and
• the risk-free rate.
Measures that capture the effects of these factors are referred to as "the Greeks" because of their names: delta, theta, gamma, vega, and rho. Thus, a large part of this topic covers the evaluation of the option Greeks. Once option market participants are aware of their Greek exposures, they can more effectively hedge their positions to mitigate risk.
3.1 DELTA
The delta of an option, Δ, is the ratio of the change in the price of the option (e.g., a call option, c) to the change in the price of the underlying asset, S, for small changes in S. Mathematically:

\Delta = \frac{\partial c}{\partial S}
As illustrated in Figure 1, delta is the slope of the call option pricing function at the current stock
price. As shown in Figure 2, call option deltas range from zero to positive one, while put option
deltas range from negative one to zero.
3.1.1 Option Delta
A call delta equal to 0.6 means that the price of a call option on a stock will change by
approximately $0.60 for a $1.00 change in the value of the stock. To completely hedge a long
stock or short call position, an investor must purchase the number of shares of stock equal to delta
times the number of options sold.
Another term for being completely hedged is delta-neutral. For example, if an investor is short
1,000 call options, he will need to be long 600 (0.6 x 1,000) shares of the underlying. When the
value of the underlying asset increases by $1.00, the underlying position increases by $600, while
the value of his option position decreases by $600. When the value of the underlying asset
decreases by $1.00, there is an offsetting increase in value in the option position.
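A few lines of arithmetic replay the offset just described; as with any delta hedge, the cancellation is only approximate for larger moves:

```python
# A minimal numeric check of the delta-neutral example above:
# short 1,000 calls with delta 0.6, hedged with 600 long shares.

delta = 0.6
n_calls = 1_000
shares = delta * n_calls          # 600 shares make the position delta-neutral

for dS in (+1.0, -1.0):           # $1 move in the underlying
    stock_pnl = shares * dS
    option_pnl = -n_calls * delta * dS   # short calls lose when the stock rises
    print(f"stock move {dS:+.0f}: stock P&L {stock_pnl:+.0f}, "
          f"option P&L {option_pnl:+.0f}, net {stock_pnl + option_pnl:+.0f}")
```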
Derivation of Delta for Non-Dividend Stock Options

From the Black–Scholes option pricing model, we know that the price of a call option on a non-dividend stock can be written as:

C_t = S_t N(d_1) - X e^{-r\tau} N(d_2)    (1)

and the price of a put option on a non-dividend stock can be written as:

P_t = X e^{-r\tau} N(-d_2) - S_t N(-d_1)    (2)

where

d_1 = \frac{\ln(S_t/X) + (r + \sigma^2/2)\tau}{\sigma\sqrt{\tau}}

d_2 = \frac{\ln(S_t/X) + (r - \sigma^2/2)\tau}{\sigma\sqrt{\tau}} = d_1 - \sigma\sqrt{\tau}

\tau = T - t

and N(\cdot) is the cumulative distribution function of the standard normal distribution:

N(d_1) = \int_{-\infty}^{d_1} f(u)\,du = \int_{-\infty}^{d_1} \frac{1}{\sqrt{2\pi}} e^{-u^2/2}\,du

First, we calculate the derivative N'(d_1):

N'(d_1) = \frac{\partial N(d_1)}{\partial d_1} = \frac{1}{\sqrt{2\pi}} e^{-d_1^2/2}    (3)

and the derivative N'(d_2):

N'(d_2) = \frac{\partial N(d_2)}{\partial d_2} = \frac{1}{\sqrt{2\pi}} e^{-d_2^2/2} = \frac{1}{\sqrt{2\pi}} e^{-(d_1 - \sigma\sqrt{\tau})^2/2} = \frac{1}{\sqrt{2\pi}} e^{-d_1^2/2}\, e^{d_1\sigma\sqrt{\tau} - \sigma^2\tau/2} = \frac{1}{\sqrt{2\pi}} e^{-d_1^2/2}\, e^{\ln(S_t/X) + r\tau} = \frac{1}{\sqrt{2\pi}} e^{-d_1^2/2} \cdot \frac{S_t}{X} e^{r\tau}    (4)

Eq. (3) and Eq. (4) may be used repeatedly in deriving the following Greek letters when the underlying asset is a non-dividend-paying stock.
For a European call option on a non-dividend stock, delta can be shown as

\Delta = N(d_1)    (5)

The derivation of Eq. (5) is the following:

\Delta = \frac{\partial C_t}{\partial S_t} = N(d_1) + S_t \frac{\partial N(d_1)}{\partial S_t} - X e^{-r\tau} \frac{\partial N(d_2)}{\partial S_t} = N(d_1) + S_t N'(d_1) \frac{\partial d_1}{\partial S_t} - X e^{-r\tau} N'(d_2) \frac{\partial d_2}{\partial S_t}

Since \partial d_1/\partial S_t = \partial d_2/\partial S_t = \frac{1}{S_t \sigma\sqrt{\tau}}, substituting Eq. (3) and Eq. (4) gives

\Delta = N(d_1) + S_t \cdot \frac{1}{\sqrt{2\pi}} e^{-d_1^2/2} \cdot \frac{1}{S_t \sigma\sqrt{\tau}} - X e^{-r\tau} \cdot \frac{1}{\sqrt{2\pi}} e^{-d_1^2/2} \frac{S_t}{X} e^{r\tau} \cdot \frac{1}{S_t \sigma\sqrt{\tau}} = N(d_1)

since the last two terms cancel.

For a European put option on a non-dividend stock, delta can be shown as

\Delta = N(d_1) - 1    (6)

The derivation of Eq. (6) is analogous:

\Delta = \frac{\partial P_t}{\partial S_t} = X e^{-r\tau} \frac{\partial N(-d_2)}{\partial S_t} - N(-d_1) - S_t \frac{\partial N(-d_1)}{\partial S_t} = -X e^{-r\tau} N'(d_2) \frac{\partial d_2}{\partial S_t} - \big(1 - N(d_1)\big) + S_t N'(d_1) \frac{\partial d_1}{\partial S_t} = N(d_1) - 1

where the first and last terms again cancel by Eq. (4).
Example: Computing delta
Suppose that OM stock is trading at $50, and there is a call option that trades on OM with an
exercise price of $45 which expires in three months. The risk-free rate is 5% and the standard
deviation of returns is 12% annualized. Determine the value of the call option’s delta.
Answer: d_1 ≈ 1.99, so Δ = N(d_1) ≈ 0.9767.
This means that when the stock price changes by $1, the option price will change by approximately $0.98.
NB: look up the value of d_1 in the normal probability tables; it is also available in Excel via the NORMSDIST function.
3.1.2 Dynamic Aspects of Delta Hedging
As we saw in Figure 1, the delta of an option is a function of the underlying stock price. That
means when the stock price changes, so does the delta. When the delta changes, the portfolio will
no longer be hedged (i.e., the number of options and underlying stocks will no longer be in balance), and the investor will need to either purchase or sell the underlying asset.
This rebalancing must be done on a continual basis to maintain the delta-neutral hedged position.
The goal of a delta-neutral portfolio (or delta-neutral hedge) is to combine a position in an asset
with a position in an option so that the value of the portfolio does not change with changes in the
value of the asset. In referring to a stock position, a delta-neutral portfolio can be made up of a
risk-free combination of a long stock position and a short call position where the number of calls to
short is given by 1 /Δc.
\text{number of options needed to delta hedge} = \frac{\text{number of shares hedged}}{\text{delta of the call option}}
3.1.3 Forward Delta
The delta of a forward position is equal to one, implying a one-to-one relationship between the
value of the forward contract and its underlying asset. A forward contract position can easily be
hedged with an offsetting underlying asset position with the same number of securities.
When the underlying asset pays a dividend yield, q, the delta of an option or forward must be adjusted. With a dividend yield, the delta of a call option equals e^{-qT} N(d_1), the delta of a put option equals e^{-qT} [N(d_1) - 1], and the delta of a forward contract equals e^{-qT}.

3.1.4 Futures Delta

Unlike forward contracts, the delta of a futures position is not ordinarily one because of the spot-futures parity relationship. For example, the delta of a futures position on a stock or stock index that pays no dividends is e^{rT}, where r is the risk-free rate and T is the time to maturity. Assets that pay a dividend yield, q, would generate a futures delta equal to e^{(r-q)T}.
An investor would hedge a short futures position by going long the appropriate amount of the deliverable asset.
3.2 THETA (Θ)
The theta of an option, Θ, also termed the "time decay" of the option, is defined as the rate of change of the option price with respect to the passage of time:

\Theta = \frac{\partial \Pi}{\partial t}

where Π is the option price and t is the passage of time.

If τ = T - t, theta can also be defined as minus one times the rate of change of the option price with respect to the time to maturity. The derivation of this transformation is straightforward:

\Theta = \frac{\partial \Pi}{\partial t} = \frac{\partial \Pi}{\partial \tau} \frac{\partial \tau}{\partial t} = \frac{\partial \Pi}{\partial \tau} \times (-1)

where τ = T - t is the time to maturity. For the derivation of theta for various kinds of stock options, we use the definition of the negative differential with respect to time to maturity. Figure 3 overleaf shows the relationship of theta with stock prices and with time.
In-class task
Attempt the derivation of theta for the different kinds of non-dividend-paying stock options given below.
For a European call option on a non-dividend stock, theta can be written as:

\Theta = -\frac{S_t \sigma}{2\sqrt{\tau}} N'(d_1) - rX e^{-r\tau} N(d_2)

For a European put option on a non-dividend stock, theta can be shown as:

\Theta = -\frac{S_t \sigma}{2\sqrt{\tau}} N'(d_1) + rX e^{-r\tau} N(-d_2)
Note that theta in the above equations is measured in years. It can be converted to a daily basis by
dividing by 365. To find the theta for each trading day, you would divide by 252.
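A minimal sketch of the call theta calculation and the per-day conversions, reusing the assumed inputs from the earlier delta example:

```python
# A minimal sketch of Black-Scholes theta for a European call on a
# non-dividend stock, converted from a per-year to a per-day figure.
from math import log, sqrt, exp, pi
from statistics import NormalDist

S, X, r, sigma, tau = 50.0, 45.0, 0.05, 0.12, 0.25   # assumed inputs

d1 = (log(S / X) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
d2 = d1 - sigma * sqrt(tau)
n_d1 = exp(-0.5 * d1**2) / sqrt(2 * pi)              # N'(d1), the normal pdf

theta = -S * sigma * n_d1 / (2 * sqrt(tau)) - r * X * exp(-r * tau) * NormalDist().cdf(d2)
print(f"theta per year: {theta:.4f}")
print(f"theta per calendar day: {theta / 365:.4f}")
print(f"theta per trading day:  {theta / 252:.4f}")
```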
The specific characteristics of theta are as follows:
• Theta affects the value of put and call options in a similar way (e.g., as time passes, most call
and put options decrease in value, all else equal).
• Theta varies with changes in stock prices and as time passes.
• Theta is most pronounced when the option is at-the-money, especially nearer to expiration. The
left side of Figure 3 illustrates this relationship.
• Theta values are usually negative, which means the value of the option decreases as it gets
closer to expiration.
• Theta usually increases in absolute value as expiration approaches. The right side of
Figure 3 illustrates this relationship.
• It is possible for a European put option that is in-the-money to have a positive theta value.
3.3 GAMMA (Γ)
The gamma of an option, Γ, is defined as the rate of change of delta with respect to the change in the underlying asset price:

\Gamma = \frac{\partial \Delta}{\partial S} = \frac{\partial^2 \Pi}{\partial S^2}

where Π is the option price and S is the underlying asset price.
Because the option price is not linearly dependent on the underlying asset price, a delta-neutral hedging strategy is useful only when the movement in the underlying asset price is small. Once the underlying asset price moves more widely, a gamma-neutral hedge becomes necessary. The formula of gamma for various kinds of stock options follows.
Formula

For a European call option on a non-dividend stock, gamma can be shown as

\Gamma = \frac{1}{S_t \sigma \sqrt{\tau}} N'(d_1)

The derivation is:

\Gamma = \frac{\partial}{\partial S_t}\left(\frac{\partial C_t}{\partial S_t}\right) = \frac{\partial^2 C_t}{\partial S_t^2} = \frac{\partial N(d_1)}{\partial d_1}\frac{\partial d_1}{\partial S_t} = N'(d_1) \cdot \frac{1}{S_t \sigma \sqrt{\tau}}

For a European put option on a non-dividend stock, gamma is identical:

\Gamma = \frac{1}{S_t \sigma \sqrt{\tau}} N'(d_1)
In-class work
Attempt the derivation of the gamma for a European put option on a non-dividend stock given above.
Figure 4 illustrates the relationship between gamma and the stock price for a stock option. As
indicated in Figure 4, gamma is largest when an option is at-the-money (at stock price = X). When
an option is deep in-the-money or out-of-the-money, changes in stock price have little effect on
gamma.
When gamma is large, delta will be changing rapidly. On the other hand, when gamma is small,
delta will be changing slowly. Since gamma represents the curvature component of the call-price
function not accounted for by delta, it can be used to minimize the hedging error associated with a
linear relationship (delta) to represent the curvature of the call-price function.
Delta-neutral positions can hedge the portfolio against small changes in stock price, while gamma
can help hedge against relatively large changes in stock price. Therefore, it is not only desirable to
create a delta-neutral position but also to create one that is gamma-neutral. In that way, neither
small nor large stock price changes adversely affect the portfolio's value. Since underlying assets and forward instruments generate linear payoffs, they
have zero gamma and, hence, cannot be employed to create gamma-neutral positions.
Gamma-neutral positions have to be created using instruments that are not linearly related to the
underlying instrument, such as options. The specific relationship that determines the number of
options that must be added to an existing portfolio to generate a gamma-neutral position is -(Γp/ΓT),
where Γp is the gamma of the existing portfolio position, and ΓT is the gamma of a traded option
that can be added. Let’s take a look at an example.
3.3.1 Relationship Among Delta, Theta, and Gamma
Stock option prices are affected by delta, theta, and gamma as indicated in the following
relationship:
r\Pi = \Theta + rS\Delta + \frac{1}{2}\sigma^2 S^2 \Gamma
where:
r = the risk-neutral risk-free rate of interest
Π = the price of the option
Θ = the option theta
S = the price of the underlying stock
Δ = the option delta
σ² = the variance of the underlying stock
Γ = the option gamma
This equation shows that the change in the value of an option position is directly affected
by its sensitivities to the Greeks. For a delta-neutral portfolio, Δ = 0, so the preceding equation
reduces to:
r\Pi = \Theta + \frac{1}{2}\sigma^2 S^2 \Gamma
The left side of the equation is the dollar risk-free return on the option (risk-free rate times option
value). Assuming the risk-free rate is small, this demonstrates that for large positive values of
theta, gamma tends to be large and negative, and vice versa, which explains the common practice
of using theta as a proxy for gamma.
For example, if a portfolio of options has a delta equal to $10,000 and a gamma equal to $5,000, the change in the portfolio value if the stock price drops to $34 from $35 is approximately

\text{change in portfolio value} \approx \Delta \times \delta S + \frac{1}{2}\Gamma \times (\delta S)^2 = \$10{,}000 \times (-1) + \frac{1}{2} \times \$5{,}000 \times (-1)^2 = -\$7{,}500
The above analysis can also be applied to measure the price sensitivity of interest-rate-related assets or portfolios to interest rate changes. Here we introduce modified duration and convexity as the risk measures corresponding to the delta and gamma above. Modified duration measures the percentage change in asset or portfolio value resulting from a change in the interest rate:

\text{Modified Duration} = -\frac{\text{Change in price}/\text{Price}}{\text{Change in interest rate}} = -\Delta / P

Using the modified duration and the relation

\text{Change in Portfolio Value} = \Delta \times \text{Change in interest rate} = (-\text{Duration} \times P) \times \text{Change in interest rate}
we can calculate the value changes of the portfolio. The above relation corresponds to the
previous discussion of delta measure. We want to know how the price of the portfolio changes
given a change in the interest rate. Similar to delta, modified duration only shows the first-order approximation of the change in value. In order to account for the nonlinear relation between the interest rate and portfolio value, we need a second-order approximation similar to the gamma measure above; this is the convexity measure. Convexity is the interest rate gamma divided by price:

\text{Convexity} = \Gamma / P
and this measure captures the nonlinear part of the price changes due to interest rate changes.
Using modified duration and convexity together allows us to develop a first- as well as second-order approximation of the price changes, similar to the previous discussion:

\text{Change in Portfolio Value} = -\text{Duration} \times P \times (\text{change in rate}) + \frac{1}{2} \times \text{Convexity} \times P \times (\text{change in rate})^2
As a result, (-Duration × P) and (Convexity × P) act like the delta and gamma measures, respectively, in the previous discussion. This shows that these Greek-style measures can also be applied to measuring risk in interest-rate-related assets or portfolios.
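A minimal sketch of the duration-convexity approximation; the portfolio value, duration, and convexity below are assumed purely for illustration:

```python
# A minimal sketch of a duration-convexity price-change approximation.
# Inputs are assumed, illustrative values.

price = 1_000_000        # portfolio value
mod_duration = 4.0       # modified duration, in years
convexity = 22.0         # convexity measure

def price_change(d_rate: float) -> float:
    """First- plus second-order approximation (duration + convexity)."""
    first = -mod_duration * price * d_rate
    second = 0.5 * convexity * price * d_rate**2
    return first + second

for bp in (+100, -100):                 # +/- 1% rate move
    d_rate = bp / 10_000
    print(f"{bp:+d} bp: approx change = ${price_change(d_rate):+,.0f}")
```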
Next we discuss how to make a portfolio gamma-neutral. Suppose the gamma of a delta-neutral portfolio is Γ, the gamma of the option in this portfolio is Γ_o, and ω_o is the number of options added to the delta-neutral portfolio. Then the gamma of the new portfolio is

\omega_o \Gamma_o + \Gamma

To make the portfolio gamma-neutral, we should trade

\omega_o^* = -\Gamma / \Gamma_o

options. Because the option position changes, the new portfolio is no longer delta-neutral. We should change the position in the underlying asset to restore delta neutrality.
For example, suppose the delta and gamma of a particular call option are 0.7 and 1.2, and a delta-neutral portfolio has a gamma of −2,400. To make the portfolio both delta neutral and gamma neutral, we should add a long position of 2,400/1.2 = 2,000 options and a short position of 2,000 × 0.7 = 1,400 shares of the underlying to the original portfolio.
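The rebalancing logic of this example can be sketched in Python as follows (the function name is ours):

def gamma_hedge(portfolio_gamma, option_delta, option_gamma):
    """Options to trade to neutralise gamma, plus the share position needed
    afterwards to restore delta neutrality."""
    n_options = -portfolio_gamma / option_gamma  # buy if positive, sell if negative
    n_shares = -n_options * option_delta         # offset the delta the options add
    return n_options, n_shares

n_options, n_shares = gamma_hedge(-2_400, 0.7, 1.2)
print(n_options, n_shares)  # 2000.0, -1400.0: long 2,000 options, short 1,400 shares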
3.4 VEGA
Vega measures the sensitivity of the option’s price to changes in the volatility of the underlying
stock. For example, a vega of 8 indicates that for a 1% increase in volatility, the option’s price will
increase by 0.08. For a given maturity, exercise price, and risk-free rate, the vega of a call is equal
to the vega of a put.
Vega can be shown as

ν = S√τ × N′(d₁)

where τ is the time to maturity and N′(·) denotes the standard normal density.
Suppose a delta-neutral and gamma-neutral portfolio has a vega equal to ν and the vega of a particular option is ν_o. Similar to gamma, we can add a position of −ν/ν_o in the option to make the portfolio vega neutral, and then adjust the underlying asset position to maintain delta neutrality. However, when we change the option position, the new portfolio is no longer gamma neutral. In general, a portfolio with a single option cannot be kept gamma neutral and vega neutral at the same time. If we want a portfolio that is both gamma neutral and vega neutral, we must include at least two kinds of options on the same underlying asset in the portfolio.
Example
A delta-neutral and gamma-neutral portfolio contains option A, option B, and the underlying asset. The gamma and vega of this portfolio are −3,200 and −2,500, respectively. Option A has a delta of 0.3, gamma of 1.2, and vega of 1.5. Option B has a delta of 0.4, gamma of 1.6, and vega of 0.8. The new portfolio will be both gamma neutral and vega neutral when we add w_A units of option A and w_B units of option B to the original portfolio, where

Gamma neutral: −3,200 + 1.2 w_A + 1.6 w_B = 0
Vega neutral: −2,500 + 1.5 w_A + 0.8 w_B = 0

Solving these two equations gives w_A = 1,000 and w_B = 1,250. The delta of the new portfolio is 1,000 × 0.3 + 1,250 × 0.4 = 800, so to maintain delta neutrality we need to short 800 shares of the underlying asset.
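The same two-equation system can be solved numerically; a minimal sketch using NumPy:

import numpy as np

# Columns: option A, option B; rows: gamma, vega
sensitivities = np.array([[1.2, 1.6],
                          [1.5, 0.8]])
# Portfolio gamma and vega to be offset
target = np.array([3_200.0, 2_500.0])

w_A, w_B = np.linalg.solve(sensitivities, target)
print(w_A, w_B)                      # 1000.0 1250.0

delta_added = 0.3 * w_A + 0.4 * w_B  # delta introduced by the new options
print(delta_added)                   # 800.0 -> short 800 shares to stay delta neutral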
3.5 RHO
Rho, ρ, measures an option’s sensitivity to changes in the risk-free rate. Keep in mind, however,
that equity options are not as sensitive to changes in interest rates as they are to changes in the
other variables (e.g., volatility and stock price). Large changes in rates have only small effects on
equity option prices. Rho is a much more important risk factor for fixed-income derivatives.
In-the-money calls and puts are more sensitive to changes in rates than out-of-the-money options.
Increases in rates cause larger increases for in-the-money call prices (versus out of the-money
calls) and larger decreases for in-the-money puts (versus out-of-the-money puts).
The rho of an ordinary stock call option should be positive because a higher interest rate reduces the present value of the strike price, which in turn increases the value of the call option. By the same reasoning, the rho of an ordinary put option should be negative. We next show the derivation of rho for various kinds of stock options.
For a European call option on a non-dividend-paying stock, rho can be shown as

rho = X τ e^(−rτ) N(d₂)

For a European put option on a non-dividend-paying stock, rho can be shown as

rho = −X τ e^(−rτ) N(−d₂)

where τ is the time to maturity.
Example
Assume that an investor would like to see how interest rate changes affect the value of a 3-month European put option she holds, given the following information. The current stock price is $65 and the strike price is $58. The interest rate and the volatility of the stock are 5% and 30% per annum, respectively. The rho of this European put can be calculated as follows:

d₂ = [ln(65/58) + (0.05 − ½(0.3)²)(0.25)] / (0.3√0.25) ≈ 0.768

rho_put = −X τ e^(−rτ) N(−d₂) = −($58)(0.25)e^(−(0.05)(0.25)) N(−0.768) ≈ −3.168

This calculation indicates that, given a 1% increase in the interest rate, say from 5% to 6%, the value of this European put option will decrease by approximately 0.03168 (0.01 × 3.168).
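The figure of −3.168 can be verified with a short Python sketch (standard Black-Scholes put rho for a non-dividend-paying stock):

from math import exp, log, sqrt
from statistics import NormalDist

def put_rho(S, X, r, sigma, tau):
    """Rho of a European put on a non-dividend-paying stock."""
    d2 = (log(S / X) + (r - 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    return -X * tau * exp(-r * tau) * NormalDist().cdf(-d2)

print(put_rho(65, 58, 0.05, 0.30, 0.25))  # about -3.168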
Further notes on the Greeks are provided in the attached file UNIT III_Greeks.docx.
4. OPERATIONAL RISK
Regulatory frameworks, such as the Basel II Accord, have sparked an intense interest in the
modelling of operational risk.
Basel II rightfully acknowledges operational risk as a major source of financial risk. In fact, even if
operational risk does not reach the disastrous levels observed in such downfalls as Barings or
Daiwa, it may still take a heavy toll. Recent studies analysed the effect of operational loss on the
market value of U.S. banks and found that a statistically significant drop in their share price
occurred and that the magnitude of this fall tended to be larger than that of the operational loss.
As can be expected, operational risk is more difficult to estimate than credit risk and far more
difficult than market risk. Similar to credit risk, the main obstacle in the application of risk measures
to operational risk remains the generation of a probability distribution of operational loss. Most of
the technical developments in the measurement of operational risk have taken place in the past 10
years because increased awareness and regulatory pressures combined to propel operational risk
to centre stage.
A number of actuarial techniques and tools are used to evaluate operational risk. All the techniques have one common feature in that they attempt to circumvent operational risk's greatest technical and analytical difficulty: the sparseness of available data. This relative lack of data is the result of several factors. To begin with, the existence of operational risk databases is quite recent. Moreover, occurrences of some operational risks, such as system failure, may be rare. Finally, industrywide database-sharing efforts are still in their infancy. Among these techniques, extreme value theory (EVT) deserves special mention.
With its emphasis on the analysis and modelling of rare events and its roots in statistical and
probabilistic theory, EVT constitutes an essential and very successful set of techniques for
quantifying operational risk. As its name indicates, EVT was originally designed to analyse rare
events, or conversely to develop statistical estimates when only a few data points are reliable.
Insurance companies exposed to natural disasters and other “catastrophes” have quickly adopted
EVT.
References in the literature on operational risk modelling
Embrechts, Klüppelberg, and Mikosch (2008) provided a thorough reference on EVT and its applications to finance and insurance.
Embrechts, Frey, and McNeil (2005, ch. 10) demonstrated the use of EVT in the context of operational risk.
Chavez-Demoulin, Embrechts, and Nešlehová (2006) introduced useful statistical and probabilistic
techniques to quantify operational risk. In particular, they discussed EVT and a number of
dependence and interdependence modelling techniques.
Chernobai, Rachev, and Fabozzi (2007) proposed a related, although slightly more probabilistic,
treatment of operational risk, with a particular emphasis on the Basel II requirements and a
discussion of VaR for operational risk.
Jarrow (2008) proposed to subdivide operational risk for banks into (1) the risk of a loss as a result
of the firm’s operating technology and (2) the risk of a loss as a result of agency costs. Jarrow
observed that contrary to market and credit risk, which are both external to the firm, operational
risk is internal to the firm. In his opinion, this key difference needs to be addressed in the design of
estimation techniques for operational risk. Jarrow further suggested that current operational risk methodologies result in an upwardly biased estimation of the capital required because they do not account for the bank's net-present-value generating process, which, in his view, should at least cover the expected portion of operational risk.
4.1 CATEGORISATION OF OPERATIONAL RISK
The Basel Committee on Bank Supervision has identified seven categories of operational risk.
These are:
1. Internal fraud
Acts of a type intended to defraud, misappropriate property or circumvent regulations, the law, or
company policy (excluding diversity or discrimination events which involve at least one internal
party). Examples include intentional misreporting of positions, employee theft, and insider trading
on an employee's own account.
2. External fraud
Acts by third party of a type intended to defraud, misappropriate property or circumvent the law.
Examples include robbery, forgery, check kiting, and damage from computer hacking.
3. Employment practices and workplace safety
Acts inconsistent with employment, health or safety laws or agreements, or which result in
payment of personal injury claims, or claims relating to diversity or discrimination issues. Examples
include workers compensation claims, violation of employee health and safety rules, organized
labour activities, discrimination claims, and general liability (e.g., a customer slipping and falling at
a branch office).
4. Clients, products, and business practices
Unintentional or negligent failure to meet a professional obligation to specific clients (including
fiduciary and suitability requirements), or from the nature or design of a product. Examples include
fiduciary breaches, misuse of confidential customer information, improper trading activities on the
bank's account, money laundering, and the sale of unauthorised products.
5. Damage to physical assets
Loss or damage to physical assets from natural disasters or other events. Examples include
terrorism, vandalism, earthquakes, fires, and floods.
6. Business disruption and system failures
Disruption of business or system failures. Examples include hardware and software failures,
telecommunication problems, and utility outages.
7. Execution, delivery, and process management
Failed transaction processing or process management, and relations with trade counterparties and
vendors. Examples include data entry errors, collateral management failures, incomplete legal
documentation,
unapproved
access
given
to
clients
accounts,
non-client
counterparty
misperformance, and vendor disputes
4.2 DETERMINATION OF REGULATORY CAPITAL
Three alternatives for determining operational risk regulatory capital are prescribed under Basel II.
1. The basic indicator approach
Under this approach, operational risk capital is set equal to 15% of average annual gross income over the previous three years. Gross income is defined as net interest income plus noninterest income.
2. The standardized approach
A bank's activities are divided into eight business lines, namely: corporate finance; trading and sales; retail banking; commercial banking; payment and settlement; agency services; asset management; and retail brokerage. The average gross income over the last three years for each business line is multiplied by a "beta factor" for that business line, and the results are summed to determine the total capital.
The beta factors are shown in the table below.

Business Line             Beta Factor
Corporate finance         18%
Trading and sales         18%
Retail banking            12%
Commercial banking        15%
Payment and settlement    18%
Agency services           15%
Asset management          12%
Retail brokerage          12%
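A minimal sketch of the standardized-approach calculation in Python (the gross income figures are hypothetical):

# Beta factors per Basel II business line
BETA = {
    "corporate finance": 0.18, "trading and sales": 0.18,
    "retail banking": 0.12, "commercial banking": 0.15,
    "payment and settlement": 0.18, "agency services": 0.15,
    "asset management": 0.12, "retail brokerage": 0.12,
}

# Hypothetical three-year average annual gross income per line ($ millions)
avg_gross_income = {"retail banking": 400.0, "trading and sales": 250.0,
                    "asset management": 100.0}

capital = sum(BETA[line] * gi for line, gi in avg_gross_income.items())
print(capital)  # 0.12*400 + 0.18*250 + 0.12*100 = 105.0 ($ millions)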
3. The advanced measurement approach (AMA)
The operational risk regulatory capital requirement is calculated by the bank internally using
qualitative and quantitative criteria. The Basel Committee has listed conditions that a bank must
satisfy in order to use the standardized approach or the AMA approach. It expects large
internationally active banks to move toward adopting the AMA.
4.3 BIA/TSA/AMA
To use the standardized approach, a bank must satisfy the following conditions:
1. The bank must have an operational risk management function that is responsible for identifying,
assessing, monitoring, and controlling operational risk.
2. The bank must keep track of relevant losses by business line and must create incentives for the improvement of operational risk management.
3. There must be regular reporting of operational risk losses throughout the bank.
4. The bank's operational risk management system must be well documented.
5. The bank's operational risk management processes and assessment system must be subject to
regular independent reviews by internal auditors. It must also be subject to regular review by
external auditors or supervisors or both.
To use the AMA approach, the bank must satisfy additional requirements. It must be able to
estimate unexpected losses based on an analysis of relevant internal and external data, and
scenario analyses. The bank's system must be capable of allocating economic capital for
operational risk across business lines in a way that creates incentives for the business lines to
improve operational risk management. The objective of banks using the AMA approach for
operational risk is analogous to their objectives when they attempt to quantify credit risk.
Assuming that they can convince regulators that their expected operational risk cost is
incorporated into their pricing of products, capital is assigned to cover unexpected costs. The
confidence level is 99.9% and the time horizon is one year.
4.4 LOSS SEVERITY AND LOSS FREQUENCY
There are two distributions that are important in estimating potential operational risk losses. One is
the loss frequency distribution and the other is the loss severity distribution.
The loss frequency distribution is the distribution of the number of losses observed during the time
horizon (usually one year).
The loss severity distribution is the distribution of the size of a loss, given that a loss occurs.
It is usually assumed that loss severity and loss frequency are independent.
For loss frequency, the natural probability distribution to use is a Poisson distribution. This distribution assumes that losses happen randomly through time so that in any short period of time there is a probability of a loss being sustained. The probability of n losses in time T is

P(n) = e^(−λT) (λT)^n / n!

The parameter λ can be estimated as the average number of losses per unit time. For example, if during a 10-year period there were a total of 12 losses, then λ is 1.2 per year or 0.1 per month. A Poisson distribution has the property that the mean frequency of losses equals the variance of the frequency of losses.
For the loss severity probability distribution, a lognormal probability distribution is often used. The
parameters of this probability distribution are the mean and standard deviation of the logarithm of
the loss.
The loss frequency distribution must be combined with the loss severity distribution for each loss
type and business line to determine a total loss distribution. Monte Carlo simulation can be used
for this purpose. As mentioned earlier, the usual assumption is that loss severity is independent of
loss frequency. On each simulation trial, we proceed as follows:
1. We sample from the frequency distribution to determine the number of loss events (= n).
2. We sample n times from the loss severity distribution to determine the loss experienced for each
loss event (L1, L2,…, Ln)
3. We determine the total loss experienced (L1 + L2 + … + Ln). When many simulation trials are used, we obtain a total loss distribution.
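These three steps can be sketched in a few lines of Python (the Poisson and lognormal parameters below are hypothetical):

import numpy as np

rng = np.random.default_rng(42)

def simulate_total_losses(lam, mu, sigma, n_trials=100_000):
    """Aggregate operational loss distribution: Poisson frequency and
    lognormal severity, assumed independent."""
    counts = rng.poisson(lam, size=n_trials)              # step 1: number of losses
    return np.array([rng.lognormal(mu, sigma, n).sum()    # steps 2-3: sum severities
                     for n in counts])

losses = simulate_total_losses(lam=1.2, mu=10.0, sigma=1.5)
print(np.percentile(losses, 99.9))  # 99.9% quantile, the AMA confidence level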
4.5 FORWARD-LOOKING APPROACHES
Risk managers should try to be proactive in preventing losses from occurring. One approach is to monitor what is happening at other banks and try to learn from their mistakes. When a $700 million rogue trader loss happened at a Baltimore subsidiary of Allied Irish Bank in 2002, risk managers throughout the world studied the situation carefully and asked: "Could this happen to us?" It immediately led banks to institute procedures for checking that counterparties had the authority to enter into derivatives transactions.
4.5.1 Causal Relationships
Operational risk managers should try to establish causal relations between decisions taken and operational risk losses. Does increasing the average educational qualifications of employees reduce losses arising from mistakes in the way transactions are processed? Will a new computer system reduce the probabilities of losses from system failures? Are operational risk losses correlated with the employee turnover rate? If so, can they be reduced by measures taken to improve employee retention? Can the risk of a rogue trader be reduced by the way responsibilities are divided between different individuals and by the way traders are motivated? One approach
establishing causal relationships is statistical. If we look at 12 different locations where a bank
operates and find a high negative correlation between the education of back office employees and
the cost of mistakes in processing transactions, it might well make sense to do a cost-benefit
analysis of changing the educational requirements for a back-office job in some of the locations. In
some cases, a detailed analysis of the cause of losses may provide insights. For example, if 40%
of computer failures can be attributed to the fact that the current hardware is several years old and
less reliable than newer versions, a cost-benefit analysis of upgrading is likely to be useful.
4.5.2 RCSA and KRIs
Risk and control self-assessment (RCSA) is an important way in which banks try to achieve a better understanding of their operational risk exposures. This involves asking the managers of the
business units themselves to identify their operational risks. Sometimes questionnaires designed
by senior management are used. A by-product of any program to measure and understand
operational risk is likely to be the development of key risk indicators (KRIs). Risk indicators are key
tools in the management of operational risk. The most important indicators are prospective. They
provide an early-warning system to track the level of operational risk in the organization. Examples
of key risk indicators are staff turnover and number of failed transactions. The hope is that key risk
indicators can identify potential problems and allow remedial action to be taken before losses are
incurred.
It is important for a bank to quantify operational risks, but it is even more important to take action to
control and manage those risks.
5. POPULAR RISK MEASURES FOR PRACTITIONERS
5.1 INTRODUCTION
The measurement of risk is at the confluence of the theory of economics, the statistics of actuarial
sciences, and the mathematics of modern probability theory. This confluence has provided a fertile
environment for the emergence of a multitude of risk measures, listed below.

Origin                    Risk Measure
Investment theory         Variance and standard deviation
Modern risk management    Value at risk
                          Expected shortfall
                          Conditional value at risk
                          Worst case expectation
Street measure            Omega
Measures from Investment Theory: Variance and Standard Deviation.
Risk is a cornerstone of the modern portfolio theory pioneered by Markowitz, Sharpe, Treynor,
Lintner, and Mossin. Research in investment management has resulted in the development of
several commonly accepted risk measures, such as variance, standard deviation, beta, and
tracking error. These risk measures are extensively discussed in corporate finance and portfolio
theory. The emphasis of this course is on modern risk management.
Modern Risk Management Measures
Modern risk management measures were born from the phenomenal development of the theory and practice of risk measurement in the past 15 years. In the words of Elroy Dimson, as relayed by Peter Bernstein, risk is when "more things can happen than will happen." Probabilities provide a theory and toolbox to address this particular type of problem. As a result, risk measurement is deeply rooted in the theory of probability. Value at risk, expected shortfall, conditional value at risk, and worst-case expectation are four of the most common and fundamental modern risk measures. We turn our focus to VaR.
5.2 VAR
Value at risk (VaR) is a probabilistic method of measuring the potential loss in portfolio value over
a given time period and for a given distribution of historical returns. VaR is the dollar or percentage
loss in portfolio (asset) value that will be equaled or exceeded only X percent of the time. In other
words, there is an X percent probability that the loss in portfolio value will be equal to or greater
than the VaR measure. VaR can be calculated for any percentage probability of loss and over any
time period. A 1%, 5%, and 10% VaR would be denoted as VaR(1%), VaR(5%), and VaR(10%),
respectively. The risk manager selects the X percent probability of interest and the time period over
which VaR will be measured. Generally, the time period selected (and the one we will use) is one
day.
5.2.1 Computing value at risk
Three methods are commonly used to compute the VaR of a portfolio: delta normal, historical
simulation, and Monte Carlo simulation.
The delta-normal methodology is an analytic approach that provides a mathematical formula for
the VaR and is consistent with mean–variance analysis. Delta-normal VaR assumes that the risk
factors are lognormally distributed (i.e., their log-returns are normally distributed) and that the
securities returns are linear in the risk factors. These assumptions are also the main shortcoming
of the method: The normality assumption does not generally hold and the linearity hypothesis is not
validated for nonlinear assets, such as fixed-income securities or options.
Calculating delta-normal VaR is a simple matter but, as has been noted, requires assuming that asset returns are normally distributed, so that standardized returns follow a standard normal distribution. Recall that a standard normal distribution
is defined by two parameters, its mean (µ = 0) and standard deviation (σ = 1), and is perfectly
symmetric with 50% of the distribution lying to the right of the mean and 50% lying to the left of the
mean. The Figure overleaf illustrates the standard normal distribution and the cumulative
probabilities under the curve.
From the Figure, we observe the following:
a. the probability of observing a value more than 1.28 standard deviations below the mean is
10%;
b. the probability of observing a value more than 1.65 standard deviations below the mean is
5%; and
c. the probability of observing a value more than 2.33 standard deviations below the mean is
1%.
Thus, we have critical z-values of −1.28, −1.65, and −2.33 for 10%, 5%, and 1% lower tail probabilities, respectively. We can now define percent VaR mathematically as:

VaR(X%) = z(X%) × σ

where VaR(X%) is the X% probability value at risk and z(X%) is the critical z-value for that tail probability.
Example: Calculating percentage and dollar VaR
A risk management officer at a bank is interested in calculating the VaR of an asset that he is
considering adding to the bank’s portfolio. If the asset has a daily standard deviation of returns
equal to 1.4% and the asset has a current value of $5.3 million, calculate the VaR (5%) on both a
percentage and dollar basis.
Answer:
The appropriate critical z-value for a VaR (5%) is -1.65. Using this critical value and the
asset’s standard deviation of returns, the VaR (5%) on a percentage basis is calculated as
follows:
VaR (5%) = -1.65(0.014) = -0.0231 = -2.31%
The VaR(5%) on a dollar basis is calculated as follows:
VaR (5%)= -0.0231 x $5,300,000 = -$122,430
Thus, there is a 5% probability that, on any given day, the loss in value on this particular
asset will equal or exceed 2.31%, or $122,430.
If an expected return other than zero is given, VaR becomes the expected return minus the
quantity of the critical value multiplied by the standard deviation.
VaR = [E(R) - zσ]
Example: Calculating VaR given an expected return
For a $100,000,000 portfolio, the expected 1-week portfolio return and standard deviation are 0.00188 and 0.0125, respectively. Calculate the 1-week VaR at 5% significance.
Answer:
VaR = [E(R) − zσ] × portfolio value
= [0.00188 − 1.65(0.0125)] × $100,000,000
= −0.018745 × $100,000,000
= −$1,874,500
The manager can be 95% confident that the maximum 1-week loss will not exceed $1,874,500.
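Both examples follow the same formula; a minimal Python sketch (note the text uses the rounded z-value 1.65, so the exact-z figures differ slightly):

from statistics import NormalDist

def delta_normal_var(expected_return, sigma, value, alpha=0.05):
    """Dollar VaR that will be exceeded with probability alpha,
    assuming normally distributed returns."""
    z = NormalDist().inv_cdf(alpha)  # about -1.645 for alpha = 0.05
    return (expected_return + z * sigma) * value

print(delta_normal_var(0.0, 0.014, 5_300_000))        # about -$122,000
print(delta_normal_var(0.00188, 0.0125, 100_000_000)) # about -$1,868,000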
VaR Conversions
VaR, as calculated previously, measured the risk of a loss in asset value over a short time period.
Risk managers may, however, be interested in measuring risk over longer time periods, such as a
month, quarter, or year. VaR can be converted from a 1-day basis to a longer basis by multiplying
the daily VaR by the square root of the number of days (J) in the longer time period (called the
square root rule). For example, to convert to a weekly VaR, multiply the daily VaR by the square
root of 5 (i.e., five business days in a week). We can generalize the conversion method as follows:

VaR(J days) = VaR(1 day) × √J
Example: Converting daily VaR to other time bases
Assume that a risk manager has calculated the daily VaR (10%) of a particular asset to be $12,500. Calculate the weekly, monthly, semiannual, and annual VaR for this asset. Assume 250 days per year and 50 weeks per year.
Answer
Weekly:     $12,500 × √5   = $27,951
Monthly:    $12,500 × √20  = $55,902
Semiannual: $12,500 × √125 = $139,754
Annual:     $12,500 × √250 = $197,642
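A one-function sketch of the square root rule in Python:

from math import sqrt

def scale_var(daily_var, horizon_days):
    """Square-root-of-time rule: scale a 1-day VaR to a J-day horizon."""
    return daily_var * sqrt(horizon_days)

for label, days in [("weekly", 5), ("monthly", 20),
                    ("semiannual", 125), ("annual", 250)]:
    print(label, round(scale_var(12_500, days)))  # 27951, 55902, 139754, 197642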
VaR can also be converted to different confidence levels. For example, a risk manager may want to convert VaR with a 95% confidence level to VaR with a 99% confidence level. This conversion is done by adjusting the current VaR measure by the ratio of the critical z-value at the updated confidence level to the critical z-value at the current confidence level.
Example: Converting VaR to different confidence levels
Assume that a risk manager has calculated VaR at a 95% confidence level to be $16,500. Now assume the risk manager wants to adjust the confidence level to 99%. Calculate the VaR at a 99% confidence level.
Answer:
VaR(99%) = VaR(95%) × (z(1%) / z(5%)) = $16,500 × (2.33 / 1.65) ≈ $23,300
The VaR Methods
The three main VaR methods can be divided into two groups: linear methods and full
valuation methods.
1. Linear methods replace portfolio positions with linear exposures on the appropriate risk
factor. For example, the linear exposure used for option positions would be delta while the linear
exposure for bond positions would be duration. This method is used when calculating VaR with the
delta-normal method.
2. Full valuation methods fully reprice the portfolio for each scenario encountered over a
historical period, or over a great number of hypothetical scenarios developed through
historical simulation or Monte Carlo simulation. Computing VaR using full revaluation is more
complex than linear methods. However, this approach will generally lead to more accurate
estimates of risk in the long run.
Linear Valuation: The Delta-Normal Valuation Method
The delta-normal approach begins by valuing the portfolio at an initial point as a function of a specific risk factor, S (assuming only one risk factor exists):
V0 = V(S0)
With this expression, we can describe the relationship between the change in portfolio and the
change in the risk factor as:
dV = Δ0 x dS
Here, Δ0 is the sensitivity of the portfolio to changes in the risk factor, S. As with any linear
relationship, the biggest change in the value of the portfolio will accompany the biggest change in
the risk factor. The VaR at a given level of significance, z, can be written as:
VaR = Δ0 x (zσS0)
Generally speaking, VaR developed by a delta-normal method is more accurate over shorter
horizons than longer horizons. Consider, for example, a fixed income portfolio. The risk factor
impacting the value of this portfolio is the change in yield. The VaR of this portfolio would then be
calculated as follows:
VaR = modified duration x z x annualized yield volatility x portfolio value
Since the delta-normal method is only accurate for linear exposures, non-linear exposures, such as convexity, are not adequately captured with this VaR method. By using a Taylor series expansion, convexity can be accounted for in a fixed income portfolio by using what is known as the delta-gamma method, which parallels the Greeks discussed earlier. For now, just take note that complexity can be added to the delta-normal method to increase its reliability when measuring non-linear exposures.
Full Valuation: Monte Carlo and Historic Simulation Methods
The Monte Carlo simulation approach revalues a portfolio for a large number of risk factor
values, randomly selected from a normal distribution. Historical simulation revalues a portfolio
using actual values for risk factors taken from historical data. These full valuation approaches
provide the most accurate measurements because they include all nonlinear relationships and
other potential correlations that may not be included in the linear valuation models.
Comparing the Methods
The delta-normal method is appropriate for large portfolios without significant option-like
exposures. This method is fast and efficient. Full-valuation methods, either based on historical data
or on Monte Carlo simulations, are more time consuming and costly. However, they may be the
only appropriate methods for large portfolios with substantial option-like exposures, a wider range
of risk factors, or a longer-term horizon.
Delta-Normal Method
The delta-normal method (a.k.a. the variance-covariance method or the analytical method) for
estimating VaR requires the assumption of a normal distribution. This is because the method
utilizes the expected return and standard deviation of returns. For example, in calculating a daily
VaR, we calculate the standard deviation of daily returns in the past and assume it will be
applicable to the future. Then, using the asset’s expected 1-day return and standard deviation, we
estimate the 1-day VaR at the desired level of significance. The assumption of normality is
troublesome because many assets exhibit skewed return distributions (e.g., options), and equity
returns frequently exhibit leptokurtosis (fat tails).
When a distribution has “fat tails,” VaR will tend to underestimate the loss and its associated
probability. Also know that delta-normal VaR is calculated using the historical standard deviation,
which may not be appropriate if the composition of the portfolio changes, if the estimation period
contained unusual events, or if economic conditions have changed.
Example: Delta-normal VaR
The expected 1-day return for a $100,000,000 portfolio is 0.00085 and the historical standard
deviation of daily returns is 0.0011. Calculate daily value at risk (VaR) at 5% significance.
Answer:
To locate the value for a 5% VaR, we use the Alternative z-Table. We look through the body of the table until we find the value we are looking for. In this case, we want 5% in the lower tail, which leaves 45% below the mean that is not in the tail. Searching for 0.45, we find the closest value, 0.4505. Adding the z-value in the left-hand margin and the z-value at the top of the column in which 0.4505 lies, we get 1.6 + 0.05 = 1.65, so the z-value coinciding with a 95% VaR is 1.65. (Notice that we ignore the negative sign, which would indicate the value lies below the mean.) Refer to the Appendices for all tables.
VaR = [Rp − (z)(σ)] × Vp
= [0.00085 − 1.65(0.0011)] × $100,000,000
= −0.000965 × $100,000,000
= −$96,500
The interpretation of this VaR is that there is a 5% chance the minimum 1-day loss is 0.0965%, or
$96,500. (There is 5% probability that the 1-day loss will exceed $96,500.)
Alternatively, we could say we are 95% confident the 1-day loss will not exceed $96,500.
Advantages of the delta-normal VaR method include the following:
• Easy to implement.
• Calculations can be performed quickly.
• Conducive to analysis because risk factors, correlations, and volatilities are identified.
Disadvantages of the delta-normal method include the following:
• The need to assume a normal distribution.
• The method is unable to properly account for distributions with fat tails, either because of
unidentified time variation in risk or unidentified risk factors and/or correlations.
• Nonlinear relationships of option-like positions are not adequately described by the delta-normal
method. VaR is misstated because the instability of the option deltas is not captured.
Historical Simulation Method
In the historical simulation approach, the VaR is “read” from a portfolio’s historical return
distribution by taking the historical asset returns and applying the current portfolio allocation to
derive the portfolio’s return distribution. The advantage of this method is that it does not assume
any particular form for the return distribution and is thus suitable for fat-tailed and skewed
distributions. A major shortcoming of this approach is that it assumes that past return distributions
are an accurate predictor of future return patterns.
The easiest way to calculate the 5% daily VaR using the historical method is to accumulate a
number of past daily returns, rank the returns from highest to lowest, and identify the lowest 5% of
returns. The highest of these lowest 5% of returns is the 1-day, 5% VaR.
Example: Historical VaR
You have accumulated 100 daily returns for your $100,000,000 portfolio. After ranking
the returns from highest to lowest, you identify the lowest six returns:
-0.0011, -0.0019, -0.0025, -0.0034, -0.0096, -0.0101
Calculate daily value at risk (VaR) at 5% significance using the historical method.
Answer:
The lowest five returns represent the 5% lower tail of the “distribution” of 100 historical returns. The
fifth lowest return (-0.0019) is the 5% daily VaR. We would say there is a 5% chance of a daily loss
exceeding 0.19%, or $190,000.
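A minimal Python sketch of this ranking procedure (the 94 filler returns are synthetic; only the six worst returns are taken from the example):

import numpy as np

def historical_var(returns, portfolio_value, alpha=0.05):
    """Historical-simulation VaR: the highest of the worst alpha% of returns."""
    sorted_returns = np.sort(returns)        # ascending, worst first
    k = int(len(sorted_returns) * alpha)     # size of the alpha tail
    return sorted_returns[k - 1] * portfolio_value

rng = np.random.default_rng(0)
sample = np.append(rng.normal(0.0005, 0.0002, 94),   # synthetic benign returns
                   [-0.0011, -0.0019, -0.0025, -0.0034, -0.0096, -0.0101])
print(historical_var(sample, 100_000_000))  # -190000.0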
Advantages of the historical simulation method include the following:
• The model is easy to implement when historical data is readily available.
• Calculations are simple and can be performed quickly.
• Horizon is a positive choice based on the intervals of historical data used.
• Full valuation of portfolio is based on actual prices.
• It is not exposed to model risk.
• It includes all correlations as embedded in market price changes.
Disadvantages of the historical simulation method include the following:
• There may not be enough historical data for all assets.
• Only one path of events is used (the actual history), which includes changes in correlations and
volatilities that may have occurred only in that historical period.
• Time variation of risk in the past may not represent variation in the future.
• The model may not recognize changes in volatility and correlations from structural
changes.
• It is slow to adapt to new volatilities and correlations as old data carries the same weight
as more recent data. However, exponentially weighted average (EWMA) models can be
used to weigh recent observations more heavily.
• A small number of actual observations may lead to insufficiently defined distribution
tails.
Monte Carlo simulation is a more sophisticated probabilistic approach in which the portfolio VaR
is obtained numerically by generating a return distribution using a large number of random
simulations. A great advantage of Monte Carlo simulation is its flexibility because the risk factors
do not need to follow a specific type of distribution and the assets are allowed to be nonlinear.
Monte Carlo simulation, however, is more difficult to implement and is subject to more model risk
than historical simulations and delta-normal VaR.
Example:
A Monte Carlo output specifies the expected 1-week return and standard deviation for a $100,000,000 portfolio as 0.00188 and 0.0125, respectively. Calculate the 1-week VaR at 1% significance.
Answer:
VaR = [Rp − (z)(σ)] × Vp
= [0.00188 − 2.33(0.0125)] × $100,000,000
= −0.027245 × $100,000,000
= −$2,724,500
The manager can be 99% confident that the maximum 1-week loss will not exceed $2,724,500. Alternatively, the manager could say there is a 1% probability that the minimum loss will be $2,724,500 or greater (the portfolio will lose at least $2,724,500).
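A minimal Monte Carlo sketch of this calculation (a normal return distribution is used here purely for illustration; the text's rounded z of 2.33 makes its answer differ slightly):

import numpy as np

rng = np.random.default_rng(7)

def monte_carlo_var(mu, sigma, value, alpha=0.01, n_sims=1_000_000):
    """Simulate portfolio returns and read VaR off the alpha-quantile.
    Any return distribution could be substituted for the normal below."""
    simulated = rng.normal(mu, sigma, n_sims)
    return np.quantile(simulated, alpha) * value

print(monte_carlo_var(0.00188, 0.0125, 100_000_000))  # about -$2,720,000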
Advantages of the Monte Carlo method include the following:
• It is the most powerful model.
• It can account for both linear and nonlinear risks.
• It can include time variation in risk and correlations by aging positions over chosen horizons.
• It is extremely flexible and can incorporate additional risk factors easily.
• Nearly unlimited numbers of scenarios can produce well-described distributions.
Disadvantages of the Monte Carlo method include the following:
• There is a lengthy computation time as the number of valuations escalates quickly.
• It is expensive because of the intellectual and technological skills required.
• It is subject to model risk of the stochastic processes chosen.
• It is subject to sampling variation at lower numbers of simulations.
Shortcomings of the VaR methodology
An alternative definition of the VaR of a portfolio, as the minimum amount that a portfolio is expected to lose within a specified time period at a given confidence level α, reveals a crucial weakness. The VaR has a "blind spot" in the α-tail of the distribution, which means that the possibility of extreme events is ignored. The P&L distributions for two investments X and Y in the figure below have the same VaR, but the P&L distribution of Y is riskier because it harbors larger potential losses.
Furthermore, the use of VaR in credit portfolios may result in increased concentration risk. The VaR of an investment in a single risky bond may be larger than the VaR of a portfolio of risky bonds issued by different entities. VaR is thus in contradiction with the key principle of diversification, which is central to the theory and practice of finance.
5.3 STRESS TESTING
Stress testing complements VaR by helping to address the blind spot in the α-tail of the
distribution. In stress testing, the risk manager analyzes the behaviour of the portfolio under
several extreme market scenarios that may include historical scenarios as well as scenarios
designed by the risk manager. The choice of scenarios and the ability to fully price the portfolio in
each situation are critical to the success of stress testing.
Steps in stress testing
(1) Development of scenarios involving plausible extreme market movements (generate scenarios).
- A key issue in stress testing is the choice of the scenarios that are used.
- It is important to generate credible worst-case scenarios that are related to portfolio positions.
- This is the most challenging part of scenario generation, as most fundamentals impacting financial risks are interrelated.
(2) Valuing the portfolio under the scenarios.
- This involves marking to market the institution's portfolio value using the worst-case market rates.
(3) Summarise the results.
- The summary should show the expected level of market losses under each stress scenario and also identify business areas where losses will be concentrated.
Approaches to Scenario Analysis
1. Stressing individual variables
- Used when there is a large movement in one variable while other variables are held constant.
- These are large changes whose effects are unlikely to be accurately estimated using the Greeks.
2. Scenarios involving several variables
- Usually, when one market variable moves by a significant magnitude, it distorts movements in other markets.
- As such, financial institutions assess combinations of scenarios in which several variables change at the same time.
- The common practice is to use extreme movements in market variables that have occurred in the past.
3. Scenarios generated by senior management
- History never repeats itself, at least not exactly, possibly because market participants are aware of past crises and try to avoid the same mistakes.
- In many ways, the scenarios that are most useful in stress testing are those generated by senior management.
- Management uses their understanding of the markets, world politics, the economic environment, and current global uncertainties to build plausible scenarios that could lead to large losses.
Reverse Stress Testing
- This is the search for scenarios that lead to large losses.
- It is a risk management tool.
- It can be used as a tool to facilitate brainstorming by the stress-testing committee.
- It can uncover scenarios that would be disastrous for a financial institution but that senior management has never considered.
- E.g., consider a Zimbabwean consumer products producer with significant unhedged US$-denominated liabilities. You are particularly concerned about the stability of the local currency (Zim$), since a devaluation would make the US$ liabilities more expensive.
- To stress test, proceed as follows.
Step 1. Scenario generation
- Increases in the exchange rate increase the liabilities.
- The key issue in scenario generation is producing credible worst-case scenarios that are relevant to the portfolio positions; the most challenging part is judging the magnitude of the movements of individual market variables and their interrelationships.
- Suppose an economist presents two potential events, namely (1) a significant widening of the trade deficit, which puts pressure on the local currency, interest rates, and equity prices, and (2) a narrowing of the trade deficit, the more benign scenario for the local market.
Step 2. Revalue the positions
- Revalue the company's financial position given the new market rates. The financial exposure would include US$ and Zim$ assets and liabilities as well as equity instruments.
Step 3. Summarise the results
- Summarise the results in the form of a report. The summary should show the level of mark-to-market losses.
- The summary should also show what the loss will be and identify business areas where the loss will be most concentrated.
- The report should also include direct and indirect effects, as well as possible management action (hedging the risks involved).
- The largest potential loss comes from the unhedged US$ liabilities.
- This could be reduced through a forex forward hedge or by buying a put option on the Zim$/US$ exchange rate.
Stress testing is an attempt to overcome the weakness of VaR: it involves estimating how the portfolio of a financial institution would perform under extreme market moves. Reverse stress testing involves searching for scenarios that lead to a large loss.
Disadvantages
- The scenarios generated might be unreasonable, e.g. a scenario might involve two variables moving in opposite directions when it is known that they are almost invariably positively correlated in stressed market conditions.
Advantages
- It can uncover scenarios that would be disastrous for the financial institution but had not occurred to senior management.
- It should naturally lead to buy-in to the idea of stress testing.
Some stress tests focus on particular market variables. Examples of stress tests that have been
recommended include:
1. Shifting a yield curve by 100 basis points
2. Changing implied volatilities for an asset by 20% of current values
3. Changing an equity index by 10%
4. Changing an exchange rate for a major currency by 6% or changing
the exchange rate for a minor currency by 20%
SUBJECTIVE VS OBJECTIVE PROBABILITY
Objective probability – is historical and is calculated by observing the frequency with which the event happens over a defined period. Most objective probabilities calculated in real life are less reliable than they appear, because the historical period used may not be representative of the future.
Subjective probability – is a probability derived from an individual’s personal judgement about the
chance of a particular event occurring. The probability is not based on historical data. It is a degree
of belief.
Different people are liable to have different subjective probabilities for the same event.
Thus the probabilities assigned to historical simulation are objective while the probabilities
assigned to the scenarios in the stress testing are subjective.
Expected Shortfall and Conditional VaR.
Expected shortfall (ES) and conditional VaR (CVaR), which are also called expected tail loss, are
two closely related risk measures that can be viewed as refinements of the VaR methodology
addressing the blind spot in the tail of the distribution.
Conditional VaR is the average of all the d-day losses exceeding the d-day (1 - α) VaR. Thus, the
CVaR cannot be less than the VaR, and the computation of the d-day (1 - α) VaR is embedded in
the calculation of the d-day (1 - α) CVaR.
The difference between the definition of CVaR and the definition of expected shortfall is subtle. In fact, when the cumulative distribution function (CDF) is continuous, the expected shortfall and the CVaR coincide. In general, however, when the CDF is not continuous, as shown in the figure below, the CVaR and expected shortfall may differ.
Shortcomings.
The main shortcoming of the CVaR and expected shortfall methodologies is that they only take into
account the tail of the distribution. Although computing the CVaR or expected shortfall is sufficient
if risk is narrowly defined as the possibility of incurring a large loss, it may not be enough to choose
between two investments X and Y because they may have the same CVaR or expected shortfall
but different shapes of distribution.
Worst Case Expectation
Computing the worst case expectation can best be summarised by the figure below.
5.4 BACKTESTING
Whatever the method used for calculating VaR, an important reality check is backtesting. It involves testing how well the VaR estimates would have performed in the past. Suppose that we have developed a procedure for calculating a one-day 99% VaR. Backtesting involves looking at how often the loss in a day exceeded the one-day 99% VaR calculated using the procedure for that day.
Days when the actual change exceeds VaR are referred to as exceptions. If exceptions happen on
about 1% of the days, we can feel reasonably comfortable with the methodology for calculating
VaR. If they happen on, say, 7% of days, the methodology is suspect and it is likely that VaR is
underestimated.
From a regulatory perspective, the capital calculated using the VaR estimation procedure is then
too low. On the other hand, if exceptions happen on, say 0.3% of days it is likely that the procedure
is overestimating VaR and the capital calculated is too high.
One issue in backtesting VaR is whether we take account of changes made in the portfolio during
the time period considered. There are two possibilities. The first is to compare VaR with the
hypothetical change in the portfolio value calculated on the assumption that the composition of the
portfolio remains unchanged during the time period. The other is to compare VaR to the actual
change in the value of the portfolio during the time period.
VaR itself is invariably calculated on the assumption that the portfolio will remain unchanged during
the time period, and so the first comparison based on hypothetical changes is more logical.
However, it is actual changes in the portfolio value that we are ultimately interested in.
In practice, risk managers usually compare VaR to both hypothetical portfolio changes and actual
portfolio changes. (In fact, regulators insist on seeing the results of backtesting using actual as
well as hypothetical changes.) The actual changes are adjusted for items unrelated to the market
risk—such as fee income and profits from trades carried out at
prices different from the mid-market price.
Example:
Suppose that we backtest a VaR model using 600 days of data. The VaR confidence level is 99%
and we observe nine exceptions. The expected number of exceptions is six. Should we reject the
model? The probability of nine or more exceptions can be calculated in Excel as
1 - BINOMDIST(8, 600, 0.01, TRUE)
It is 0.152. At a 5% confidence level, we should not therefore reject the model. However, if the
number of exceptions had been 12, we would have calculated the probability of 12 or more
exceptions as 0.019 and rejected the model. The model is rejected when the number of exceptions
is 11 or more. (The probability of 10 or more exceptions is greater than 5%, but the probability of
11 or more is less than 5%.)
Suppose again that we backtest a VaR model using 600 days of data when the VaR confidence
level is 99% and we observe one exception, well below the expected number of six. Should we
reject the model? The probability of one or zero exceptions can be calculated in Excel as
BINOMDIST(1, 600, 0.01, TRUE)
It is 0.017. At a 5% confidence level, we should therefore reject the model. However, if the number
of exceptions had been two or more, we would not have rejected the model.
Suppose that, as in the previous two examples, we backtest a VaR model using 600 days of data when the VaR confidence level is 99%, using a two-tailed test statistic with critical value 3.84 (the 5% point of a chi-square distribution with one degree of freedom). The value of the statistic is greater than 3.84 when the number of exceptions, m, is one or less and when the number of exceptions is 12 or more. We therefore accept the VaR model when 2 ≤ m ≤ 11, and reject it otherwise.
Generally, the difficulty of backtesting a VaR model increases as the VaR confidence level
increases. This is an argument in favor of not using very high confidence levels for VaR.
5.5 A NEW CLASSIFICATION OF RISK MEASURES
So far we have identified a number of risk measures - some from the well-established investment
theory literature, some from the relatively new risk management literature, and some from the
investment industry’s intense interest in improving its own understanding and evaluation of risk.
In this section, the goal is to ascertain the properties of various risk measures and define a more
relevant classification than the triptych of measures from investment theory, measures from risk
management, and industry driven measures that has been used so far. A classification effort is
needed because half a century of developments in the theory and practice of finance has produced
a cornucopia of risk measures and raised a number of practical questions: Are all risk measures
equally “good” at estimating risk? If they are, then should some criteria exist that desirable risk
measures need to satisfy? Finally, should all market participants, traders, portfolio managers, and
regulators use the same risk measures? But these questions can only be answered after
developing an understanding of the risk measurement process because understanding the
measurement process helps develop important insights into specific aspects of the risk being
measured. After all, a man with a scale in his hands is more likely to be measuring weights than
distances. In the same spirit, understanding how risk is being measured by knowing the properties
of the risk measures being used will help in understanding not only the dimensions of risk being
captured but also the dimensions of risk left aside. In this section, risk measures are classified as
families or classes that satisfy sets of common properties. We will discuss four classes of risk
measures and explore how the risk measures introduced earlier fit in this new classification:
1. monetary risk measures,
2. coherent risk measures,
3. convex risk measures, and
4. spectral risk measures.
The Figure overleaf summarizes the relationships between the classes and measures.
Coherent Risk Measures
Coherent risk measures may be defined as the class of monetary risk measures satisfying the following four "coherence" properties:
1. Monotonicity: If the return of asset X is always less than that of asset Y, then the risk of asset X must be greater. This translates into

X ≤ Y in all states of the world ⇒ ρ(X) ≥ ρ(Y).

As an alternative to this first property, one could consider positivity: if an investment makes a profit in every state of the world, then its risk cannot be more than 0, that is,

X ≥ 0 ⇒ ρ(X) ≤ 0.
2. Subadditivity: The risk of a portfolio of assets cannot be more than the sum of the risks of the individual positions. Formally, if an investor has two positions in investments X and Y, then

ρ(X + Y) ≤ ρ(X) + ρ(Y).

This property guarantees that the risk of a portfolio cannot be more (and should generally be less) than the sum of the risks of its positions, and hence it can be viewed as an extension of the concept of diversification introduced by Markowitz. This property is particularly important for portfolio managers and banks trying to aggregate their risks among several trading desks.
3. Homogeneity: If a position in asset X is increased by some proportion k, then the risk of the position increases by the same proportion k. Mathematically,

ρ(kX) = kρ(X).

This property guarantees that risk scales according to the size of the positions taken. It does not, however, reflect the increased liquidity risk that may arise when a position increases. For example, owning 500,000 shares of company XYZ might be riskier than owning 100 shares because, in the event of a crisis, selling 500,000 shares will be more difficult and costly and will require more time. As a remedy, a number of scholars have proposed to adjust X directly to reflect the increased liquidity risk of a larger position.
4. Translation invariance or risk-free condition: Adding cash to an existing position reduces the risk of the position by an equivalent amount. For an investment with value X and an amount of cash r,

ρ(X + r) = ρ(X) − r.
Equipped with a definition of coherent risk measures, the following two questions can be
addressed: Is coherence necessary? And are the measures introduced earlier coherent?
Coherence is not necessary for all applications. Depending on whether one is a banker, portfolio
manager, or regulator, some of the properties will be more important than others. The obvious
example is subadditivity, which is primordial in portfolio management applications. Another
example would be translation invariance, which underpins the regulatory applications of a risk
measure.
Regarding the second question, standard deviation calculated using a distribution of asset returns
is not a monetary measure and, as a result, it cannot be coherent. Standard deviation calculated
using a P&L distribution is a monetary measure, but it is not coherent because it does not satisfy
the monotonicity property.
Value at risk is not coherent because it does not satisfy the subadditivity property. This sharp
contradiction with the principle of diversification should be of particular concern to a bank risk
manager who aims at aggregating the VaR of various desks to obtain an overall VaR for the bank’s
trading operations. Because the VaR fails to be subadditive, no assurance exists that the bank’s
VaR will reflect the diversification occurring among desks.
Thus, further widespread use of VaR, especially at the regulatory level, could result in a significant
increase in systemic risk. Furthermore, risk modeling in general and VaR in particular is not a
robust means of managing risk due to the volatility of risk measures and the limited guidance they
provide in times of crisis.
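The subadditivity failure is easy to reproduce numerically. The following sketch (our own construction, not from the text) uses two independent bonds, each defaulting with 4% probability for a loss of 100; at the 95% level each bond alone has a VaR of zero, yet the two-bond portfolio has a positive VaR:

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Independent bonds: each loses 100 with probability 4%, otherwise 0
loss_a = np.where(rng.random(n) < 0.04, 100.0, 0.0)
loss_b = np.where(rng.random(n) < 0.04, 100.0, 0.0)

def var_95(losses):
    """95% VaR of a loss distribution (positive numbers are losses)."""
    return np.quantile(losses, 0.95)

print(var_95(loss_a), var_95(loss_b))  # 0.0 0.0  (4% default prob. sits inside the tail)
print(var_95(loss_a + loss_b))         # 100.0 -> VaR(X+Y) > VaR(X) + VaR(Y)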
Expected shortfall is a coherent measure.
For conditional value at risk, when the P&L distribution is continuous, CVaR and ES coincide, and
as a result CVaR is coherent. When the P&L distribution is not continuous, CVaR is not coherent.
However, introducing a standardized α-tail cumulative density function does remedy this problem,
and ensures that CVaR remains a coherent measure even for discontinuous distribution functions.
Backtesting VaR Methodologies
Backtesting is the process of comparing losses predicted by the value at risk (VaR) model to
those experienced over the sample testing period. If a model were completely accurate, we would
expect VaR to be exceeded (this is called an exception) with the same frequency predicted by the
confidence level used in the VaR model. In other words, the probability of observing a loss amount
greater than VaR is equal to the significance level (x%). This value is also obtained by calculating
one minus the confidence level. For example, if a VaR of $10 million is calculated at a 95%
confidence level, we expect to have exceptions (losses exceeding $10 million) 5% of the time. If
exceptions are occurring with greater frequency, we may be underestimating the actual risk. If
exceptions are occurring less frequently, we may be overestimating risk.
There are three desirable attributes of VaR estimates that can be evaluated when using a
backtesting approach.
The first desirable attribute is that the VaR estimate should be unbiased. To test this property, we
use an indicator variable to record the number of times an exception occurs during a sample
period. For each sample return, this indicator variable is recorded as 1 for an exception or 0 for a
non-exception. The average of all indicator variables over the sample period should equal x% (i.e.,
the significance level) for the VaR estimate to be unbiased.
A second desirable attribute is that the VaR estimate is adaptable. For example, if a large return
increases the size of the tail of the return distribution, the VaR amount should also be increased.
Given a large loss amount, VaR must be adjusted so that the probability of the next large loss
amount again equals x%. This suggests that the indicator variables, discussed previously, should
be independent of each other. It is necessary that the VaR estimate accounts for new information
in the face of increasing volatility.
A third desirable attribute, which is closely related to the first two attributes, is for the VaR estimate
to be robust. A strong VaR estimate produces only a small deviation between the number of
expected exceptions during the sample period and the actual number of exceptions. This attribute
is measured by examining the statistical significance of the autocorrelation of extreme events over
the backtesting period. A statistically significant autocorrelation would indicate a less reliable VaR
measure. By examining historical return data, we can gain some clarity regarding which VaR
method actually produces a more reliable estimate in practice. In general, VaR approaches that
are nonparametric (e.g., historical simulation and the hybrid approach) do a better job at producing
VaR amounts that mimic actual observations when compared to parametric methods such as an
exponential smoothing approach (e.g., GARCH). The likely reason for this performance difference
is that nonparametric approaches can more easily account for the presence of fat tails in a return
distribution. Note that higher levels of λ (the exponential weighting parameter) in the hybrid
approach will perform better than lower levels of λ.
Finally, when testing the autocorrelation of tail events, we find that the hybrid approach performs
better than exponential smoothing approaches. In other words, the hybrid approach tends to reject
the null hypothesis that autocorrelation is equal to zero fewer times than exponential smoothing
approaches.
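A minimal backtesting sketch in Python, under assumed data and names (none of it comes from this module), that records the exception indicators and checks both the exception rate and the first-order autocorrelation of the indicator series:

import numpy as np

def backtest(pnl, var_forecasts):
    # Indicator variable: 1 when the day's loss exceeds the VaR forecast
    exceptions = (pnl < -var_forecasts).astype(int)

    # Unbiasedness: the average indicator should be close to x%
    rate = exceptions.mean()

    # Independence: lag-1 autocorrelation of the indicator series
    # (significant autocorrelation suggests exceptions cluster in time)
    x = exceptions - rate
    rho = (x[1:] * x[:-1]).sum() / (x * x).sum()
    return rate, rho

# Illustrative data: normal P&L against a constant 95% normal VaR
rng = np.random.default_rng(1)
pnl = rng.normal(0.0, 1e6, size=1000)
var_forecasts = np.full(1000, 1.645e6)
rate, rho = backtest(pnl, var_forecasts)
print(f"exception rate {rate:.3f} (target 0.050), autocorrelation {rho:.3f}")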
Stress Testing
During times of crisis, a contagion effect often occurs in which volatility and correlations both
increase, eroding any diversification benefits. Stressing the correlations is a method used to
model the contagion effect that could occur in a crisis event.
One approach for stress testing is to examine historical crisis events, such as the Asian crisis,
October 1987 market crash, etc. After the crisis is identified, the impact on the current portfolio is
determined. The advantage of this approach is that no assumptions of underlying asset returns or
normality are needed. The biggest disadvantage of using historical events for stress testing is that
it is limited to only evaluating events that have actually occurred.
The historical simulation approach does not limit the analysis to specific events. Under this
approach, the entire data sample is used to identify “extreme stress” situations for different asset
classes. For example, certain historical events may impact the stock market more than the bond
market. The objective is to identify the five to ten worst weeks for specific asset classes and then
evaluate the impact on today’s portfolio. The advantage of this approach is that it may identify a
crisis event that was previously overlooked for a specific asset class.
The focus is on identifying extreme changes in valuation instead of extreme movements in
underlying risk factors. The disadvantage of the historical simulation approach is that it is still
limited to actual historical data. An alternative approach is to analyze different predetermined
stress scenarios. For example, a financial institution could evaluate a 200bp increase in short-term
rates, an extreme inversion of the yield curve or an increase in volatility for the stock market. As in
the previous method, the next step is then to evaluate the effect of the stress scenarios on the
current portfolio. An advantage to scenario analysis is that it is not limited to the evaluation of risks
that have occurred historically. It can be used to address any possible scenarios. A disadvantage
of the stress scenario approach is that the risk measure is deceptive for various reasons. For
example, a shift in the domestic yield curve could cause estimation errors by overstating the risk
for a long and short position and understating the risk for a long-only position.
Asset-class-specific risk is another disadvantage of the stress scenario approach. For example,
emerging market debt, mortgage-backed securities, and bonds with embedded options all have
unique asset class specific features such that interest rate risk only explains a portion of total risk.
Addressing asset class risks is even more crucial for financial institutions specializing in certain
products or asset classes.
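As a hedged sketch of the predetermined-scenario approach, the Python fragment below revalues a portfolio under a few named scenarios; the sensitivities, scenario names, and shock sizes are illustrative assumptions, and P&L is approximated with first-order sensitivities only:

portfolio = {
    "pnl_per_100bp_rate_rise": -250_000,    # $ P&L if short-term rates rise 100bp
    "pnl_per_10pct_equity_fall": -400_000,  # $ P&L if the equity market falls 10%
}

scenarios = {
    "rates +200bp":    {"rate_shift_bp": 200, "equity_move_pct": 0},
    "equities -20%":   {"rate_shift_bp": 0,   "equity_move_pct": -20},
    "combined crisis": {"rate_shift_bp": 200, "equity_move_pct": -20},
}

for name, s in scenarios.items():
    pnl = (portfolio["pnl_per_100bp_rate_rise"] * s["rate_shift_bp"] / 100
           + portfolio["pnl_per_10pct_equity_fall"] * (-s["equity_move_pct"]) / 10)
    print(f"{name:15s}: estimated P&L ${pnl:,.0f}")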
Read on VaR versus Daily Earnings at Risk (DEAR)
6.0 MANAGING INTEREST RATE RISK
- Interest rate risk is more difficult to manage than risks arising from other market variables,
e.g. exchange rates and commodity prices.
- One complication is that there are many interest rates in any given economy or currency, e.g.
Treasury rates, interbank borrowing and lending rates, etc.
- Another complication is that more than a single number is needed to describe interest rates:
the full yield curve.
Question: Why is it more difficult to manage interest rate risk than risks from other
variables?
Management of Net Interest Income
Suppose we are using the following lending and borrowing rates:

Maturity (years)   Deposit rate             Mortgage rate            Net interest income
1                  3%                       6%                       6% - 3% = 3%
5                  3% (increased to 4%)     6% (increased to 8%)     stable net interest income
- It is the job of the asset-liability management function to ensure that this type of interest rate
risk is minimised.
- One method of doing this is to ensure that the maturities of the assets on which interest is
earned and the maturities of the liabilities on which interest is paid are matched.
- When the 5-year deposit rate is increased to 4% and the 5-year mortgage rate to 8%, the
maturities of assets and liabilities must be matched.
- If there is an imbalance and the 5-year deposits mature earlier than the 5-year mortgages,
there is a repricing gap: the liabilities mature/reprice earlier than the assets, the mortgages.
- This leads to long-term rates being set higher than short-term rates to lengthen deposit
maturities and close out the gap.
LIBOR – LONDON INTERBANK OFFERED RATE
- The rate at which one bank is prepared to lend to another bank.
- Many loans to companies and governments have floating rates benchmarked to LIBOR,
e.g. LIBOR + 1%, depending on their creditworthiness.
- The British Bankers' Association provides LIBOR quotes in different currencies for maturities of
1, 3, 6, and 12 months at 11:00 am every business day.
- These rates are based on information provided to the BBA by large banks.
- A bank must satisfy certain creditworthiness criteria to qualify for receiving LIBOR
deposits from other banks.
- Typically it must have an AA credit rating.
- LIBOR rates are therefore the 1-month to 12-month borrowing rates for banks that have an AA
credit rating.
Interest rates are normally quoted in terms of basis points:
- 1 basis point = 0.01%
- 100 basis points = 1%
- A change of 250 basis points is considered a small change in interest rates.
Managing Fixed Income Portfolios
- A bond pays a 10% annual coupon, the yield to maturity is 12% and the face value is $1000.
The remaining maturity is 3 years and valuation is done at a coupon date. What is the value of
the bond?
Recall that a bond provides the holder with cash flows ci at times ti for i = 1,..., n. The price B and
yield y (continuously compounded) are related by
B = Σ ci e^(-y·ti)
Adjusting from continuous compounding to annual compounding, the value of the bond, B, is
Bond value = 100/(1.12) + 100/(1.12)^2 + 1100/(1.12)^3 ≈ $951.96
Using the annuity formula:
Bond value = 100 × [(1.12^3 - 1)/(0.12 × 1.12^3)] + 1000/(1.12)^3 ≈ $951.96
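Both routes to the price can be checked with a short Python sketch (using only the figures given above):

face, coupon_rate, y, n = 1000, 0.10, 0.12, 3
coupon = coupon_rate * face  # $100 per year

# Direct discounting of each cash flow
price = sum(coupon / (1 + y) ** t for t in range(1, n + 1)) + face / (1 + y) ** n

# Annuity formula for the coupons plus the discounted face value
annuity = coupon * ((1 + y) ** n - 1) / (y * (1 + y) ** n)
price_alt = annuity + face / (1 + y) ** n

print(round(price, 2), round(price_alt, 2))  # both print 951.96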
YTM – a promised rate of return which will only be realised if the following conditions are met (i.e.
if the following risks do not materialise):
1. Interest rates in the economy do not change, i.e. the bond does not suffer from interest
rate risk.
2. The issuer of the bond does not default, i.e. the bond does not suffer from default or credit
risk.
3. The bond is not redeemed before maturity, i.e. the bond does not have a call option (or the
option is not exercised).
The horizon date
- The period at the end of which the investor wishes to liquidate his investments, for instance if
he has invested in order to meet a future obligation.
- It is the date of the future payment (the horizon date).
Horizon date value
- The value of the investment at the horizon date, i.e. at the end of the investment period.
Realised return
- The actual return earned by the investor when the investment is liquidated or sold at the
horizon date.
Interest rate risk has two components:
1. Interest rate reinvestment risk
2. Interest rate price risk
Interest rate reinvestment risk
- The risk that when interest rates change, coupons will be reinvested at a rate different from
the promised rate.
- This risk affects the accumulated interest earned on the coupons.
Interest rate price risk
- The risk that when interest rates change, the future cash flows of the bond from the horizon
date to maturity will be discounted at a rate different from the promised ytm. This affects the
horizon date price (HDP).
[Figure: variance of realised return plotted against the horizon date (time axis in units of
duration D), with labels for net reinvestment risk (dominant when HD > duration), net price risk
(dominant when HD < duration), the offsetting point HD = D, and default and call risk.]
NB* The duration of a bond measures the sensitivity of percentage changes in the bond price with
respect to changes in the yield. It can be interpreted as the average life of the bond.
Formula
We know that the bond price is
B = Σ ci e^(-y·ti)
where B is the price of the bond, ci is the cash flow at time ti for i = 1,..., n, and y is the yield
(continuously compounded).
Differentiating with respect to the yield,
dB/dy = -Σ ci ti e^(-y·ti)
This expression may be written as
dB/dy = -B·D
so that for a small yield change Δy, ΔB = -B·D·Δy. Thus
Duration: D = Σ ti·[ci e^(-y·ti) / B] = -(1/B)·(dB/dy)
- The bond price B is the present value of all the cash flows.
- Duration is therefore a weighted average of the times when payments are made, with the
weight applied to time t being equal to the proportion of the bond's total present value
provided by the cash flow at time t.
- Duration is a measure of how long, on average, the bond holder has to wait for the cash flows.
Example
Consider a 3-year 10% coupon bond with a face value of $1000. Suppose that the yield on the
bond is 12% per annum with continuous compounding and that coupon payments are made semi-
annually. What is the duration of the bond?
Time (years)   Cash flow ($)   Present value ($)   Weight   Time × weight
0.5            50              47.09               0.0500   0.0250
1.0            50              44.35               0.0471   0.0471
1.5            50              41.76               0.0443   0.0665
2.0            50              39.33               0.0417   0.0835
2.5            50              37.04               0.0393   0.0983
3.0            1050            732.56              0.7776   2.3327
Total          1300            942.13              1.0000   2.6531
Bond price adjustment following a small change in the ytm, using duration:
ΔB = -D × B × Δy = -2.6531 × 942.13 × Δy = -2499.57 × Δy
Question: When the ytm increases by 10 basis points, by how much does the bond price
change?
100 basis points = 1%, so 10 basis points = 0.1% = 0.001
ΔB = -2499.57 × 0.001 = -2.50, i.e. the price will decrease.
Thus, the bond price will decrease to 942.13 - 2.50 = 939.63.
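A short Python sketch reproducing the duration table and the price adjustment above:

import math

times = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
cash_flows = [50, 50, 50, 50, 50, 1050]
y = 0.12  # continuously compounded yield

pvs = [c * math.exp(-y * t) for c, t in zip(cash_flows, times)]
price = sum(pvs)                                              # ≈ 942.13
duration = sum(t * pv for t, pv in zip(times, pvs)) / price   # ≈ 2.6531

# First-order price change for a 10bp yield rise: ΔB = -B·D·Δy
db = -price * duration * 0.001
print(round(price, 2), round(duration, 4), round(db, 2))      # 942.13 2.6531 -2.5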
Accuracy of Duration
What is B when the ytm is 12.1%? Recalculate using the new yield and check B:
B = 50e^(-0.121×0.5) + 50e^(-0.121×1.0) + 50e^(-0.121×1.5) + 50e^(-0.121×2.0) +
50e^(-0.121×2.5) + 1050e^(-0.121×3.0) = 939.64, which is very close to the duration-based
estimate of 939.63.
- Duration captures price changes well for small changes in interest rates.
- For large changes in interest rates, the bond price adjustment using duration is not perfect.
This is primarily because the relationship between interest rates and bond prices is nonlinear.
If y is expressed with annual compounding, it can be shown that the relationship between bond
prices and yields becomes
ΔB = -B·D·Δy / (1 + y)
And more generally, if y is expressed with a compounding frequency of m times per year, then
ΔB = -B·D·Δy / (1 + y/m)
A variable D* defined by
D* = D / (1 + y/m)
and sometimes referred to as the bond's modified duration allows the duration relationship to be
simplified to
ΔB = -B·D*·Δy
Example
A bond has a price of 94.213 and a duration of 2.653. The yield, expressed with semiannual
compounding, is 12.3673%.
The modified duration is D* = 2.653 / (1 + 0.123673/2) = 2.4985
Therefore,
ΔB = -94.213 × 2.4985 × Δy = -235.39 × Δy
When the yield (semiannually compounded) increases by 10 basis points, the duration relationship
predicts that the bond price will fall by 235.39 × 0.001 = 0.235, so that the bond price
goes down to 94.213 - 0.235 = 93.978.
A zero coupon bond has a maturity of 5 years and a face value of $1000. Assuming annual
compounding, calculate the change in the price of the bond when its ytm shifts from 10% to 12.2%.
Hint: use both the bond price adjustment formula and the long method.
B = 1000/(1.1)^5 = $620.92
B after the shift in ytm:
B = 1000/(1.122)^5 = $562.39
Calculate the difference:
ΔB = $562.39 - $620.92 = -$58.53
Using the bond price adjustment formula (the duration of a zero coupon bond equals its maturity,
so D = 5 and Δy = 2.2% or 0.022):
ΔB = -D × B × Δy = -5 × 620.92 × 0.022 = -$68.30
- There is a significant variance between -$68.30 and -$58.53.
The problem emanates from the formula itself: it assumes a linear relationship, whereas yields and
bond prices are nonlinearly related. There is a need to take into cognisance the curvature of the
relationship.
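A Python sketch of this example, comparing the exact repricing with the duration approximation; the modified-duration and convexity refinement at the end is a standard textbook correction, added here for illustration:

face, n = 1000, 5
y0, y1 = 0.10, 0.122

b0 = face / (1 + y0) ** n        # 620.92
b1 = face / (1 + y1) ** n        # 562.39
exact_change = b1 - b0           # -58.53

# Duration approximation used above (Macaulay duration of a zero = maturity)
approx = -n * b0 * (y1 - y0)     # -68.30

# Refinement: modified duration plus a convexity term (annual compounding)
mod_dur = n / (1 + y0)
convexity = n * (n + 1) / (1 + y0) ** 2
better = -mod_dur * b0 * (y1 - y0) + 0.5 * convexity * b0 * (y1 - y0) ** 2

print(round(exact_change, 2), round(approx, 2), round(better, 2))
# -58.53 -68.3 -58.37 (the curvature term closes most of the gap)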
Convexity
Convexity measures this curvature and can be used to improve the relationship between changes
in yields and changes in bond prices previously captured by duration:
C = (1/B)·(d²B/dy²) = Σ ci ti² e^(-y·ti) / B
The numerator on the RHS shows that convexity is the weighted average of the square of the time
to the receipt of the cash flows.
When we apply a Taylor series expansion to the bond price, we get
ΔB/B = -D·Δy + ½·C·(Δy)²
Portfolio Immunization
A portfolio consisting of long and short positions in interest-rate-sensitive assets and liabilities can
be protected against relatively small parallel shifts in the yield curve by ensuring that its duration is
zero.
In addition, it can be protected against relatively large parallel shifts in the yield curve by ensuring
that its duration and convexity are both zero or close to zero.
Example
Consider a fund for retired employees with the following obligations (in $ millions) over the next
ten years:

Year         1   2   3   4   5   6   7   8   9   10
Obligation   5   5   4   4   4   3   3   2   2   0
Let’s assume that the obligations are not indexed to the rate of inflation. We can immunise this
portfolio against interest rate risk in one of the following ways:
1. Duration-match each future obligation individually.
2. Calculate the duration of our future obligations and purchase a portfolio of bonds whose
duration is equal to the duration of our liabilities.
3. Immunise with a dedicated portfolio, also known as cash flow matching.
Using method 2
Assuming that the yield to maturity is 10%, the future obligations can be treated as a portfolio of
zero coupon bonds, whose duration is
Dp = Σ xi·Di
where
xi = weight of each bond (its share of the total present value)
Di = duration of the ith bond
Dp = portfolio duration
Dp = 3.71 (the table below follows through the process of computation)
Year    Obligation ($m)   PV of obligation ($m)   Weight   Year × weight
1       5                 4.55                    0.2074   0.2074
2       5                 4.13                    0.1886   0.3772
3       4                 3.01                    0.1371   0.4114
4       4                 2.73                    0.1247   0.4987
5       4                 2.48                    0.1133   0.5667
6       3                 1.69                    0.0773   0.4637
7       3                 1.54                    0.0703   0.4918
8       2                 0.93                    0.0426   0.3406
9       2                 0.85                    0.0387   0.3484
10      0                 0.00                    0.0000   0.0000
Total                     21.91                   1.0000   3.7059
The duration of our liabilities, which are effectively the obligations of the pension fund, is 3.7 years.
To immunise this liability series against interest rate risk (against small, parallel shifts in the yield
curve), a portfolio of bonds with a duration of 3.7 years must be purchased.
NB* This applies only if the future obligations are nominal, such as payments to retired employees
that are not indexed to inflation. For large changes in interest rates, both the duration and the
convexity of the bond portfolio should match those of the pension obligations.
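A short Python sketch reproducing the liability-duration calculation in the table above:

obligations = [5, 5, 4, 4, 4, 3, 3, 2, 2, 0]   # $ millions, years 1 to 10
y = 0.10                                       # yield to maturity

pvs = [ob / (1 + y) ** t for t, ob in enumerate(obligations, start=1)]
total_pv = sum(pvs)                                             # ≈ 21.91
duration = sum(t * pv for t, pv in enumerate(pvs, start=1)) / total_pv
print(round(total_pv, 2), round(duration, 2))                   # 21.91 3.71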
Using method 3
This method involves constructing the least-cost portfolio of bonds that generates a stream of
future cash flows that exactly matches the stream of future obligations.
To illustrate how it works:
Year   Obligation ($)   Bond   Coupon rate (%)   Current price ($)   Maturity (years)
1      100 000          A      10                900                 1
2      200 000          B      15                1000                1
3      400 000          C      12                950                 2
4      -                D      8                 1100                3
                        E      7                 1200                4

All bonds have a face value of $1000.
Step 1
We find the bond with the same maturity as the furthest liability; the furthest liability is $400 000 in
year 3, and the 3-year bond is D. If there is more than one such bond, choose the one with the
least cost.
- The number of D bonds to purchase in order to generate the $400 000 obligation, where each
bond pays its $1000 face value plus a final coupon of $80 (8% of face value):
- Number of D bonds = 400 000 / (1000 + 80) = 370.37 ≈ 371
Step 2
- Look at the next furthest obligation, taking into consideration the coupons paid by the first
bond.
- Sum of the coupons we will receive from bond D in year 2 = 371 × 80 = $29 680
- Find the bond with the same maturity as the next furthest obligation; the 2-year bond is C.
- Number of C bonds = (200 000 - 29 680) / (1000 + 120) = 152.07 ≈ 153
- Coupons from bond C in year 1 = 153 × 120 = $18 360
Step 3
- For the year-1 obligation, bonds A and B both mature in one year; A is the least cost per dollar
of cash flow delivered.
- Number of A bonds = (100 000 - 18 360 - 29 680) / (1000 + 100) = 47.24 ≈ 48
- Cost = 48 × 900 = $43 200
The resulting cash flows match the obligations:

Payment year          1         2         3
Obligation            100 000   200 000   400 000
Bond D (371 bonds)    29 680    29 680    400 680
Bond C (153 bonds)    18 360    171 360   -
Bond A (48 bonds)     52 800    -         -
Total received        100 840   201 040   400 680
The three bond immunisation strategies above are passive bond management strategies. They fall
under asset-liability management, whose objective is not to generate positive returns but to
ensure that the returns on assets are not less than the returns on the liabilities that those assets
must pay in the future.
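The back-to-front logic of the three steps can be expressed as a short Python sketch (the round-up convention mirrors the worked example):

import math

face = 1000
obligations = {1: 100_000, 2: 200_000, 3: 400_000}
# cheapest bond available at each maturity: (name, coupon rate, price)
bonds = {3: ("D", 0.08, 1100), 2: ("C", 0.12, 950), 1: ("A", 0.10, 900)}

holdings = {}
incoming = {t: 0.0 for t in obligations}   # coupons arriving from later bonds
for t in sorted(obligations, reverse=True):
    name, c, price = bonds[t]
    needed = obligations[t] - incoming[t]
    n = math.ceil(needed / (face + c * face))   # redemption plus final coupon
    holdings[name] = n
    for s in range(1, t):                       # coupons paid in earlier years
        incoming[s] += n * c * face
    print(f"buy {n} of bond {name} at {price}: cost {n * price:,}")

# Output matches the worked example: 371 of D, 153 of C, 48 of A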
Option Trading Strategies
Refer to the above document. The main emphasis here is on the basics of put options and call
options, and on covered calls and protective puts.
Basic definitions
A call option gives the holder the right to buy the underlying asset by a certain date for a certain
price.
A put option gives the holder the right to sell the underlying asset by a certain date for a certain
price.
A covered call is a short position in a call option on an asset combined with a long position in the
underlying asset.
Example: Covered call
An investor purchases a stock for S0 = $43 and sells a call for C0 = $2.10 with a strike price,
X = $45.
(1) Show the expression for profit and compute the maximum profit, maximum loss and the
breakeven price.
(2) Compute the profits when the stock price is $0, $35, $40, $45, $50, and $55.
From the notes we know that
profit = -max(0, ST - X) + ST - S0 + C0
       = -max(0, ST - $45) + ST - $43.00 + $2.10
maximum profit = X + C0 - S0 = $45.00 + $2.10 - $43.00 = $4.10
maximum loss = S0 - C0 = $43.00 - $2.10 = $40.90 (incurred if the stock falls to zero)
breakeven price = S0 - C0 = $43.00 - $2.10 = $40.90
The computations below show the profit calculations at the various stock prices.
Covered Call Profits
$0:  -max(0, $0 - $45) + $0.00 - $43.00 + $2.10 = -$40.90
$35: -max(0, $35 - $45) + $35.00 - $43.00 + $2.10 = -$5.90
$40: -max(0, $40 - $45) + $40.00 - $43.00 + $2.10 = -$0.90
$45: -max(0, $45 - $45) + $45.00 - $43.00 + $2.10 = $4.10
$50: -max(0, $50 - $45) + $50.00 - $43.00 + $2.10 = $4.10
$55: -max(0, $55 - $45) + $55.00 - $43.00 + $2.10 = $4.10
The characteristic of a covered call is that the sale of the call adds income to the position at the
cost of limiting the upside gain. It is an ideal strategy for an investor who thinks the stock will move
little in either direction in the near future.
As long as ST > S0 - C0 ($40.90 in the preceding example), the investor profits from the position.
Task: Attempt plotting the payoff diagram
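As a starting point for the task, the following Python sketch plots the covered call profit profile (matplotlib is an assumed dependency; the parameters come from the example above):

import numpy as np
import matplotlib.pyplot as plt

s0, c0, x = 43.0, 2.10, 45.0          # stock price, call premium, strike
st = np.linspace(0, 60, 200)          # terminal stock prices

profit = -np.maximum(0, st - x) + st - s0 + c0   # covered call profit

plt.plot(st, profit)
plt.axhline(0, color="grey", lw=0.5)
plt.xlabel("Stock price at expiry, ST ($)")
plt.ylabel("Profit ($)")
plt.title("Covered call: long stock at $43, short $45 call at $2.10")
plt.show()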
A protective put (also called portfolio insurance or a hedged portfolio) is constructed by holding a
long position in the underlying security and buying a put option.
Example: Protective put
An investor purchases a stock for S0 = $37.50 and buys a put for P0 = $1.40 with a strike price,
X = $35.
(1) Derive the expressions for the profit and the maximum profit, and compute the maximum
loss and the breakeven price.
(2) Compute the profits when the price is $0, $30, $35, $40, $45, and $50.
profit = max(0, X - ST) + ST - S0 - P0
       = max(0, $35 - ST) + ST - $37.50 - $1.40
maximum profit = ST - S0 - P0 = ST - $38.90 (unlimited as ST rises)
maximum loss = S0 - X + P0 = $37.50 - $35.00 + $1.40 = $3.90
breakeven price = S0 + P0 = $37.50 + $1.40 = $38.90
Protective Put Profits
profit = max(0, X - ST) + ST - S0 - P0
$0:  max(0, $35 - $0) + $0.00 - $37.50 - $1.40 = -$3.90
$30: max(0, $35 - $30) + $30.00 - $37.50 - $1.40 = -$3.90
$35: max(0, $35 - $35) + $35.00 - $37.50 - $1.40 = -$3.90
$40: max(0, $35 - $40) + $40.00 - $37.50 - $1.40 = $1.10
$45: max(0, $35 - $45) + $45.00 - $37.50 - $1.40 = $6.10
$50: max(0, $35 - $50) + $50.00 - $37.50 - $1.40 = $11.10
Read on
Bull spreads, bear spreads, butterfly spreads, collars, straddles and box spreads
7.0 GROUP ASSIGNMENTS
Assess the effectiveness of various bank regulations as a risk management tool in Zimbabwe.
Outline the capital requirements of banks under Basel II and comment on the differences between
Basel II and Basel 2.5.
What are the implications of Basel III for the capital, liquidity, and profitability of banks?
IFRS 9.
Liquidity risk management in Zimbabwe’s commercial banks: are we in line with international best
practice?
APPENDICES