Advanced Risk Management I Lecture 5 Value at Risk & co.


Page 1: Advanced Risk Management I Lecture 5 Value at Risk & co

Advanced Risk Management I

Lecture 5

Value at Risk & co.

Page 2: Advanced Risk Management I Lecture 5 Value at Risk & co

Map of the exposures

• Equity
  – Country
  – Sector

• Bond
  – Currency and bucket
  – Issuer (rating class) and bucket

• Foreign exchange
  – Value of the exposures in foreign currency

Page 3: Advanced Risk Management I Lecture 5 Value at Risk & co

Profit and loss

• Define, at time t, for a given market:
  – A set of maturities t1, t2, …, tn
  – A set of nominal cash-flows c1, c2, …, cn
  – A set of discount factors P(t,t1), P(t,t2), …, P(t,tn)

• The mark-to-market value at time t is

V(t) = c1 P(t,t1) + c2 P(t,t2) + … + cn P(t,tn)

• At the following date t⁺, the mark-to-market is obtained from P(t⁺,ti) = (1 + ri) P(t,ti) for every i, so that

V(t⁺) − V(t) = c1 r1 P(t,t1) + c2 r2 P(t,t2) + … + cn rn P(t,tn)
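As a minimal illustration of the two formulas above (not part of the original slides), the following Python sketch computes V(t) and the profit and loss for illustrative cash flows, discount factors and percentage changes:

    # Illustrative inputs: c_i, P(t, t_i) and the percentage changes r_i
    cash_flows = [100.0, 100.0, 1100.0]        # c_1 ... c_n
    discount_factors = [0.97, 0.94, 0.90]      # P(t, t_1) ... P(t, t_n)
    changes = [0.001, -0.002, 0.0015]          # r_1 ... r_n

    # V(t) = c_1 P(t,t_1) + ... + c_n P(t,t_n)
    v_t = sum(c * p for c, p in zip(cash_flows, discount_factors))

    # V(t+) - V(t) = c_1 r_1 P(t,t_1) + ... + c_n r_n P(t,t_n)
    pnl = sum(c * r * p for c, r, p in zip(cash_flows, changes, discount_factors))

    print(v_t, pnl)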

Page 4: Advanced Risk Management I Lecture 5 Value at Risk & co

Risk measurement

• The key problem for the construction of a risk measurement system is then the joint distribution of the percentage changes of value r1, r2,…rn.

• The simplest hypothesis is a multivariate normal distribution. The RiskMetrics™ approach assumes a “locally” normal distribution, consistent with a GARCH model.

Page 5: Advanced Risk Management I Lecture 5 Value at Risk & co

Risk measurement methodologies

• Parametric approach: assumes a conditionally normal distribution (EWMA model) and is based on volatility and correlation parameters

• Monte Carlo simulation: risk-factor scenarios are simulated from a given distribution, the position is revalued, and the empirical distribution of losses is computed

• Historical simulation: risk-factor scenarios are drawn from market history, the position is revalued, and the empirical distribution of losses is computed.

Page 6: Advanced Risk Management I Lecture 5 Value at Risk & co

Value-at-Risk

• Define Xi = ri ci P(t,ti), the profit and loss on bucket i. The loss is then given by −Xi. A risk measure is a function ρ(Xi).

• Value-at-Risk:

VaR(Xi) = qα(−Xi) = inf{x: Prob(−Xi ≤ x) ≥ α}

• The function qα(.) is the α-level quantile of the distribution of the losses (−Xi).
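A minimal Python sketch of this definition (an illustration added to these notes, not the lecture's own code): VaR is read off as the α-quantile of the empirical loss distribution −Xi.

    import numpy as np

    def value_at_risk(pnl, alpha=0.99):
        """alpha-quantile of the loss distribution -X."""
        losses = -np.asarray(pnl)              # losses are -X_i
        return np.quantile(losses, alpha)      # smallest x with Prob(loss <= x) >= alpha

    # Usage with simulated profits and losses (illustrative only)
    rng = np.random.default_rng(0)
    pnl = rng.normal(0.0, 10_000.0, size=100_000)
    print(value_at_risk(pnl, 0.99))            # close to 2.33 * 10,000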

Page 7: Advanced Risk Management I Lecture 5 Value at Risk & co

VaR as “margin”

• Value-at-Risk is the analogue of the “margin” concept used in futures markets.

• In futures markets, positions are marked-to-market every day, and for each position a margin (a cash deposit) is posted by both the buyer and the seller, to ensure enough capital is available to absorb the losses within a trading day.

• Likewise, a VaR is the amount of capital allocated to a given risk to absorb losses within a holding period horizon (unwinding period).

Page 8: Advanced Risk Management I Lecture 5 Value at Risk & co

VaR as “capital”

• It is easy to see that VaR can also be seen as the amount of capital that must be allocated to a risk position to limit the probability of loss to a given confidence level.

VaR(Xi) = qα(−Xi) = inf{x: Prob(−Xi ≤ x) ≥ α} = inf{x: Prob(x + Xi ≥ 0) ≥ α} =

= inf{x: Prob(x + Xi < 0) ≤ 1 − α}

Page 9: Advanced Risk Management I Lecture 5 Value at Risk & co

VaR and distribution

• Call FX the distribution of Xi and F−X the distribution of the losses −Xi. Notice that

FX(−VaR(Xi)) = Prob(Xi ≤ −VaR(Xi))

= Prob(−Xi ≥ VaR(Xi)) = Prob(−Xi ≥ F−X⁻¹(α))

= Prob(F−X(−Xi) ≥ α) = 1 − α

• So, we may conclude

Prob(Xi ≤ −VaR(Xi)) = 1 − α

Page 10: Advanced Risk Management I Lecture 5 Value at Risk & co

VaR methodologies

• Parametric: assumes profits and losses to be (locally) normally distributed.

• Monte Carlo: assumes the probability distribution to be known, but the pay-off is not linear (e.g. options).

• Historical simulation: no assumption about the distribution of profits and losses.
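For the linear case, the historical-simulation method can be sketched as follows (illustrative Python; the exposure and return data are hypothetical): revalue the position under every historical scenario and take the empirical quantile of the losses.

    import numpy as np

    def historical_var(exposures, factor_returns, alpha=0.99):
        """Historical-simulation VaR for a linear position.

        exposures      : (n_factors,) marked-to-market exposure per risk factor
        factor_returns : (n_days, n_factors) historical percentage changes
        """
        scenario_pnl = factor_returns @ exposures    # P&L in each past scenario
        return np.quantile(-scenario_pnl, alpha)     # empirical quantile of the losses

    # Illustrative usage: two risk factors, 500 historical days
    rng = np.random.default_rng(1)
    history = rng.normal(0.0, 0.01, size=(500, 2))
    print(historical_var(np.array([1_000_000.0, 500_000.0]), history))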

Page 11: Advanced Risk Management I Lecture 5 Value at Risk & co

VaR in a parametric approach

• pi = ci P(t,ti): marked-to-market value of cash flow i

ri: percentage daily change of the i-th factor

Xi = pi ri: profit and loss

• Example: ri has a normal distribution with mean μi and volatility σi. Take α = 99%.

Prob(ri ≤ μi − 2.33 σi) = 1%

If μi = 0, Prob(Xi = ri pi ≤ −2.33 σi pi) = 1%

VaRi = 2.33 σi pi = maximum probable loss (1%)
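The same computation in Python (a sketch; the exposure and volatility figures below are made up for illustration):

    from scipy.stats import norm

    p_i = 1_000_000.0          # marked-to-market exposure p_i = c_i P(t, t_i)
    sigma_i = 0.012            # daily volatility of the factor (illustrative)
    alpha = 0.99

    z = norm.ppf(alpha)        # roughly 2.33
    var_i = sigma_i * p_i * z  # VaR_i = 2.33 * sigma_i * p_i
    print(round(var_i, 2))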

Page 12: Advanced Risk Management I Lecture 5 Value at Risk & co

Volatility estimation

• Volatility estimation is the key issue in the parametric approach

• Choice of the information: implied and historical

• Measurement risk

• Model risk

Page 13: Advanced Risk Management I Lecture 5 Value at Risk & co

Volatility information

• Historical volatility
  – Pros: historical info available for a large set of markets
  – Cons: history never repeats itself in the same way

• Implied volatility
  – Pros: forward looking
  – Cons: available for a limited number of markets

Page 14: Advanced Risk Management I Lecture 5 Value at Risk & co

Measurement risk

• Estimation risk of volatility can be reduced using more information on
  – Opening and closing prices
  – Maximum and minimum prices in the period

• Estimators: i) Garman and Klass; ii) Parkinson; iii) Rogers and Satchell; iv) Yang and Zhang

Page 15: Advanced Risk Management I Lecture 5 Value at Risk & co

Estimation risk (1)

• Oi and Ci are opening and closing prices of day i respectively

• Hi and Li are the highest and lowest prices of day i.

• Parkinson:

  σ̂²P = (1/T) Σi (Hi − Li)² / (4 ln 2)

• GK (Garman-Klass), with opening-jump fraction f and weight a:

  σ̂²GK = (1/T) Σi [ a (Oi − Ci−1)²/f + ((1 − a)/(1 − f)) (Hi − Li)²/(4 ln 2) ]

where the sums run over the T days in the sample and all prices are taken as log prices (consistent with the definitions on the next slide).
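A Python sketch of the two estimators, under the assumptions made above (log prices; f is the opening-jump fraction and a its weight, and the values shown are only placeholders):

    import numpy as np

    FOUR_LN_2 = 4.0 * np.log(2.0)

    def parkinson(high, low):
        """(1/T) sum (H_i - L_i)^2 / (4 ln 2), log prices; returns a daily variance."""
        return np.mean((high - low) ** 2) / FOUR_LN_2

    def garman_klass(open_, close, high, low, f=0.2, a=0.12):
        """GK with opening jump, in the form written on the slide (daily variance)."""
        jump = (open_[1:] - close[:-1]) ** 2               # (O_i - C_{i-1})^2
        intraday = (high[1:] - low[1:]) ** 2 / FOUR_LN_2   # (H_i - L_i)^2 / (4 ln 2)
        return np.mean(a * jump / f + (1.0 - a) * intraday / (1.0 - f))

Both functions expect numpy arrays of log prices and return daily variances; take the square root to obtain the daily volatilities reported in the results table a few slides below.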

Page 16: Advanced Risk Management I Lecture 5 Value at Risk & co

Estimation risk (2)

• Define: oi = Oi − Ci−1, hi = Hi − Oi, li = Li − Oi, ci = Ci − Oi. Moreover, σ²o and σ²c are the variances computed with opening and closing prices respectively.

• Rogers-Satchell:

  σ̂²RS = (1/T) Σi [ hi (hi − ci) + li (li − ci) ]

• Yang-Zhang:

  σ̂²YZ = σ̂²O + k σ̂²C + (1 − k) σ̂²RS
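In the same spirit, a sketch of the Rogers-Satchell and Yang-Zhang estimators (the sample-variance convention, ddof = 1, is an assumption of this note):

    import numpy as np

    def rogers_satchell(h, l, c):
        """(1/T) sum h(h - c) + l(l - c), with h, l, c defined as above."""
        return np.mean(h * (h - c) + l * (l - c))

    def yang_zhang(o, h, l, c, k=0.1875):
        """sigma^2_YZ = sigma^2_O + k sigma^2_C + (1 - k) sigma^2_RS."""
        var_open = np.var(o, ddof=1)     # variance of the opening jumps o_i
        var_close = np.var(c, ddof=1)    # variance of the open-to-close returns c_i
        return var_open + k * var_close + (1.0 - k) * rogers_satchell(h, l, c)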

Page 17: Advanced Risk Management I Lecture 5 Value at Risk & co

Estimation risk (3)

• Parkinson: 5 times more efficient (than the simple close-to-close estimator)
  – Mean return = 0; “opening jump” f = 0

• Garman and Klass: 6 times more efficient
  – Mean return = 0; “opening jump” f ≠ 0

• Rogers and Satchell:
  – Mean return ≠ 0; “opening jump” f = 0

• Yang and Zhang:
  – Mean return ≠ 0; “opening jump” f ≠ 0

Page 18: Advanced Risk Management I Lecture 5 Value at Risk & co

Example: Italian blue chips

[Chart: Italian blue-chip index level, ranging roughly from 41,000 to 47,000, daily from 26/09/00 to 13/10/00]

Page 19: Advanced Risk Management I Lecture 5 Value at Risk & co

Results

Date        h          l          o          c          (h − l)²/(4 ln 2)   h(h − c) + l(l − c)
27/09/00    1.57304%   0.00000%  -1.08562%   0.90659%   0.00892%            0.01048%
28/09/00    0.00000%  -1.42275%   0.39039%  -0.79109%   0.00730%            0.00899%
29/09/00    0.24260%  -0.97856%   0.26074%  -0.81075%   0.00538%            0.00420%
02/10/00    1.87499%   0.00000%  -0.34253%   1.72715%   0.01268%            0.00277%
03/10/00    0.72456%  -0.48452%  -0.06747%   0.11533%   0.00527%            0.00732%
04/10/00    0.59880%  -0.67934%  -0.42497%   0.19419%   0.00589%            0.00836%
05/10/00    0.15623%  -0.49416%   0.37639%  -0.19563%   0.00153%            0.00202%
06/10/00    0.17383%  -1.60912%   0.05003%  -1.45675%   0.01147%            0.00529%
09/10/00    0.00222%  -1.25356%  -0.45117%  -0.97338%   0.00569%            0.00353%
10/10/00    0.29532%  -0.77015%   0.65145%  -0.12238%   0.00409%            0.00622%
11/10/00    0.12797%  -1.77694%  -0.89458%  -1.27056%   0.01309%            0.01079%
12/10/00    0.54148%  -2.07786%   0.56945%  -1.48141%   0.02475%            0.02335%
13/10/00    3.39528%   0.00000%  -1.41515%   3.39528%   0.04158%            0.00000%

Volatility:  Opening 0.65599%   Closing 1.40101%   Parkinson 1.06566%   Rogers-Satchell 0.84726%
Number of observations: 13       Yang-Zhang (k = 0.1875): 1.17542%

Page 20: Advanced Risk Management I Lecture 5 Value at Risk & co

Model risk

• Beyond estimation risk, volatility itself may change over time, making the distribution non-normal.

• Garch models: shocks to the return change the volatility of the next period's return.

• Stochastic volatility models: volatility may depend on other variables than the return itself.

Page 21: Advanced Risk Management I Lecture 5 Value at Risk & co

Blue chips volatility

[Chart: blue-chip volatility, ranging roughly from 0 to 0.25, daily from 03/03/95 to 03/01/00]

Page 22: Advanced Risk Management I Lecture 5 Value at Risk & co

Garch(p,q) models

• Conditional distribution of the returns is normal, but volatility changes in time following an autoregressive process of the ARMA(p,q) kind. For example, the Garch(1,1) model is:

Rt | It−1 ~ N(0, σ²t)

σ²t = ω + α1 R²t−1 + β1 σ²t−1

Page 23: Advanced Risk Management I Lecture 5 Value at Risk & co

Garch: ABC…

• In a Garch model the unconditional distribution of returns is not normal; in particular, it is leptokurtic (“fat tails”): extreme events are more likely than under the normal distribution

• In a Garch model the future variance is forecasted recursively with the formula

  σ̂²t+i = ω + (α1 + β1) σ̂²t+i−1

• The degree of persistence is given by α1 + β1
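A minimal sketch of the forecast recursion (the parameter values are illustrative, not estimated from the lecture's data):

    def garch_forecast(sigma2_t, omega, alpha1, beta1, horizon):
        """sigma2_{t+i} = omega + (alpha1 + beta1) * sigma2_{t+i-1}, iterated forward."""
        persistence = alpha1 + beta1
        path, sigma2 = [], sigma2_t
        for _ in range(horizon):
            sigma2 = omega + persistence * sigma2
            path.append(sigma2)
        return path

    print(garch_forecast(sigma2_t=1e-4, omega=1e-6, alpha1=0.08, beta1=0.90, horizon=5))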

Page 24: Advanced Risk Management I Lecture 5 Value at Risk & co

A special Garch…

• Assume ω = 0 and α1 + β1 = 1. This is integrated Garch (Igarch) without drift:
  – i) volatility is persistent: every shock remains in the history of volatility forever
  – ii) the best predictor of the volatility at time t + i is the volatility at time t + i − 1
  – iii) the time-t volatility is given by (with λ = β1)

  σ²t = (1 − λ) R²t−1 + λ σ²t−1

Page 25: Advanced Risk Management I Lecture 5 Value at Risk & co

…called EWMA

• Notice that IGarch(1,1) with ω = 0 is the same as a model in which volatility is updated with a moving average with exponentially decaying weights (EWMA).

• The model, with parameter λ = 0.94, is employed by RiskMetrics™ to evaluate volatility and correlations.

• The model corresponds to an estimate of volatility that weights the more recent observations most heavily (λ = 0.94 corresponds, roughly, to giving positive weight to the last 75 observations).
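A sketch of the EWMA update with λ = 0.94 (the seed value used to start the recursion is an arbitrary choice of this note):

    import numpy as np

    def ewma_volatility(returns, lam=0.94):
        """sigma2_t = (1 - lam) r_{t-1}^2 + lam sigma2_{t-1}, applied along the sample."""
        sigma2 = np.var(returns[:20])        # seed with the variance of the first 20 returns
        for r in returns[20:]:
            sigma2 = (1.0 - lam) * r ** 2 + lam * sigma2
        return np.sqrt(sigma2)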

Page 26: Advanced Risk Management I Lecture 5 Value at Risk & co

Volatility estimates

Page 27: Advanced Risk Management I Lecture 5 Value at Risk & co

Ghost feature

• Tuning the weights of the EWMA estimator makes it possible to reduce the relevance of a phenomenon called the ghost feature.

• Ghost feature: a shock continues to affect the VaR estimate with the same weight for the whole period it remains in the sample, and when it exits the sample the VaR estimate changes with no apparent motivation.

Page 28: Advanced Risk Management I Lecture 5 Value at Risk & co

Cross-section aggregation

• Once the Value-at-Risk is computed for every factor and position, the measure is aggregated across factors or across different business units.

• Aggregation is performed according to two methods
  – Undiversified VaR: the algebraic sum of the individual VaR values
  – Diversified VaR: quadratic sum computed with the correlation matrix C.

Page 29: Advanced Risk Management I Lecture 5 Value at Risk & co

Cross-section aggregation: diversified VaR

VaR = √(d C dᵀ)

d = (VaR1, VaR2, …, VaRN)

C = N × N correlation matrix (ones on the diagonal, correlations ρij off the diagonal)
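A sketch of the diversified-VaR aggregation in Python (the individual VaR figures and the correlation are purely illustrative):

    import numpy as np

    def diversified_var(var_vector, corr):
        """Diversified VaR = sqrt(d C d^T), d = (VaR_1, ..., VaR_N), C = correlation matrix."""
        d = np.asarray(var_vector)
        return float(np.sqrt(d @ corr @ d))

    individual = np.array([23_300.0, 11_600.0])       # illustrative VaR_1, VaR_2
    corr = np.array([[1.0, 0.5],
                     [0.5, 1.0]])                     # illustrative correlation matrix
    print(diversified_var(individual, corr))          # never larger than the undiversified sum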

Page 30: Advanced Risk Management I Lecture 5 Value at Risk & co

VaR: temporal aggregation

• Aggregating VaR over different unwinding periods requires assumptions concerning the dynamic process describing losses

• The relationship is

  VaR(unwinding period) = √(unwinding period, in days) × daily VaR

• Notice: the relationship is based on the assumptions that:
  – i) shocks are not serially correlated
  – ii) the portfolio does not change during the unwinding period
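The square-root-of-time scaling as a short Python sketch:

    import math

    def scale_var(daily_var, horizon_days):
        """VaR over the unwinding period = sqrt(horizon in days) * daily VaR."""
        return math.sqrt(horizon_days) * daily_var

    print(scale_var(10_000.0, 10))    # 10-day VaR from a daily VaR of 10,000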

Page 31: Advanced Risk Management I Lecture 5 Value at Risk & co

Example

• Position: 1 mil. euros on Italian equity and 0.5 mil. euros on US equity. Stocks on the US market are denominated in dollars.

• Exposure:
  1 000 000 Euro   Italian equity
  500 000 Euro     US equity
  500 000 Euro     US/Euro exchange rate risk

Page 32: Advanced Risk Management I Lecture 5 Value at Risk & co
Page 33: Advanced Risk Management I Lecture 5 Value at Risk & co

Value-at-Risk validation

• Once one has built a system for the computation of VaR, how to test its effectiveness?

• A possible strategy is to verify how many times in past history losses have been higher than the VaR measure computed for the corresponding periods.

• These are called validation procedures (or backtesting)

Page 34: Advanced Risk Management I Lecture 5 Value at Risk & co

What P&L is used for validation?

• Notice again the difference between market price and marked-to-market price

• Since VaR has the goal of evaluating the marked-to-market loss of a position, the validation procedure must be carried out on the same concept of value

• Since market prices are determined by elements other than the mark-to-market value (liquidity factors and others), it would be a mistake to evaluate the VaR measure directly on market losses.

Page 35: Advanced Risk Management I Lecture 5 Value at Risk & co

Kupiec test

• A statistical test, suggested by Kupiec, is based on the hypothesis that losses exceeding VaR are independent.

• Based on this hypothesis one may compare the number x of episodes of excess losses out of a sample of N cases with the binomial distribution with exceedance probability p = 1 − α:

  P(x) = [N! / (x! (N − x)!)] p^x (1 − p)^(N−x)

Page 36: Advanced Risk Management I Lecture 5 Value at Risk & co

Likelihood ratio

• The test is given by the ratio of the likelihood of the x excess losses evaluated at the observed frequency x/N to the likelihood evaluated at the theoretical exceedance probability p.

• The test statistic, which is distributed as a chi-square with one degree of freedom, is

  LR = 2 ln[(x/N)^x (1 − x/N)^(N−x)] − 2 ln[p^x (1 − p)^(N−x)]

Page 37: Advanced Risk Management I Lecture 5 Value at Risk & co

Example

• In applications one typically takes one year of data (about 250 observations) and a 1% probability level

• If we count 4 excess losses in one year,

  LR = 2 ln[(4/250)^4 (246/250)^246] − 2 ln[(0.01)^4 (0.99)^246] = 0.77

• Since the 1% critical value of the chi-square distribution with one degree of freedom is 6.6349, the hypothesis of accuracy of the VaR measure is not rejected (the p-value of LR = 0.77 is 38.02%).
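A Python sketch of the Kupiec test reproducing the numbers above (x = 4 exceedances over N = 250 days with p = 1%):

    import math
    from scipy.stats import chi2

    def kupiec_lr(x, n, p):
        """Kupiec unconditional coverage test (requires 0 < x < n)."""
        phat = x / n
        ll_null = x * math.log(p) + (n - x) * math.log(1.0 - p)
        ll_alt = x * math.log(phat) + (n - x) * math.log(1.0 - phat)
        lr = 2.0 * (ll_alt - ll_null)
        return lr, chi2.sf(lr, df=1)       # statistic and p-value

    print(kupiec_lr(4, 250, 0.01))         # roughly (0.77, 0.38), as on the slide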

Page 38: Advanced Risk Management I Lecture 5 Value at Risk & co

Christoffersen extension

• A flaw of the Kupiec test is that it relies on the hypothesis of independent excess losses.

• Christoffersen proposed an extension taking serial dependence into account. It is a joint test of the two hypotheses (correct coverage and independence).

• The joint test may be written as

  LRcc = LRun + LRind

where LRun is the unconditional coverage test and LRind is the independence test. It is distributed as a chi-square with 2 degrees of freedom.
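A sketch of the independence component LRind in its standard first-order Markov formulation (not the lecture's own code; degenerate transition counts are handled with the convention 0·ln 0 = 0):

    import math

    def _xlogy(x, y):
        """x * ln(y), with the convention 0 * ln(0) = 0."""
        return 0.0 if x == 0 else x * math.log(y)

    def lr_independence(exceed):
        """LR_ind for a 0/1 sequence of VaR exceedances (first-order transition counts)."""
        n = [[0, 0], [0, 0]]
        for prev, curr in zip(exceed[:-1], exceed[1:]):
            n[prev][curr] += 1
        n00, n01, n10, n11 = n[0][0], n[0][1], n[1][0], n[1][1]
        pi01 = n01 / (n00 + n01) if (n00 + n01) else 0.0
        pi11 = n11 / (n10 + n11) if (n10 + n11) else 0.0
        pi = (n01 + n11) / (n00 + n01 + n10 + n11)
        ll_null = _xlogy(n00 + n10, 1 - pi) + _xlogy(n01 + n11, pi)
        ll_alt = (_xlogy(n00, 1 - pi01) + _xlogy(n01, pi01)
                  + _xlogy(n10, 1 - pi11) + _xlogy(n11, pi11))
        return 2.0 * (ll_alt - ll_null)

Combined with the Kupiec statistic, LRcc = LRun + LRind is then compared with a chi-square with 2 degrees of freedom, as stated above.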