# So I finally understood Monty Hall

Lately I have been binge-watching Mythbusters, and one of the more curious myths they took on was the Monty Hall problem. The Monty Hall problem is named after the host of a US TV show, where the candidate had the chance to win whatever prize was behind one of three doors, while the other two doors hid no prize. The twist is that after the candidate chose, the moderator would show what was behind one of the other two doors, obviously one with no prize behind it, and the candidate then had the chance to switch doors.

Now, intuitively one would say that being shown what is behind a door does not change the chances, and the candidate has a 1 in 3 chance to win the prize. The myth is that switching doors increases the chance to win substantially.

One might say this is not really a myth, as it can be shown statistically to be true. But I am bad at combinatorics, so after seeing in Mythbusters how far ahead the switching strategy is, I wanted to redo their experiment as a Monte Carlo simulation.

First, we set up the experiment and sample the winning doors and the candidate's initial selections.

# Monty Hall problem

n = 100000

prices = sample(1:3, n, replace = TRUE)   # winning doors
selected = sample(1:3, n, replace = TRUE) # candidate's initial picks

df = data.frame(prices = prices,
                selected = selected,
                shown = NA,
                wins_stay = NA,
                wins_switch = NA)

head(df)


##   prices selected shown wins_stay wins_switch
## 1      2        1    NA        NA          NA
## 2      3        2    NA        NA          NA
## 3      3        2    NA        NA          NA
## 4      3        2    NA        NA          NA
## 5      1        1    NA        NA          NA
## 6      2        3    NA        NA          NA


Next, we define how the moderator has to choose which door to show in each case. And this is the first hint at why the likelihood of winning is higher if the candidate switches: we need to distinguish between the cases where the candidate chose the winning door and where he did not, because if the candidate chose a losing door, the door to be opened by the moderator is predetermined – it is the one remaining door which is not winning.

shown = apply(df, 1, function(x){
  x = unlist(x)

  # x[1] - winning door, x[2] - chosen door
  if(x[1] == x[2]){
    # candidate chose the winning door: the moderator picks one of the other two at random
    return(sample((1:3)[-x[1]], 1))
  } else {
    # candidate chose a losing door: only one door can be opened
    return((1:3)[-c(x[1], x[2])])
  }
})
df$shown = shown

head(df)

##   prices selected shown wins_stay wins_switch
## 1      2        1     3        NA          NA
## 2      3        2     1        NA          NA
## 3      3        2     1        NA          NA
## 4      3        2     1        NA          NA
## 5      1        1     2        NA          NA
## 6      2        3     1        NA          NA

Next, we calculate the winning likelihood if the candidate always stays with the initial selection.

selected_stay = selected
df$wins_stay = prices == selected_stay

head(df)

##   prices selected shown wins_stay wins_switch
## 1      2        1     3     FALSE          NA
## 2      3        2     1     FALSE          NA
## 3      3        2     1     FALSE          NA
## 4      3        2     1     FALSE          NA
## 5      1        1     2      TRUE          NA
## 6      2        3     1     FALSE          NA

sum(df$wins_stay)/n

## [1] 0.33196

It's not very surprising that the percentage is 1 in 3, which is the initial likelihood without any additional information. Finally, we have to compute the door the candidate chooses if he switches.

selected_switch = apply(df, 1, function(x){
  x = unlist(x)
  # the candidate switches to the door that was neither selected nor shown
  (1:3)[!(1:3) %in% c(x[2], x[3])]
})
df$wins_switch = prices == selected_switch

head(df)

##   prices selected shown wins_stay wins_switch
## 1      2        1     3     FALSE        TRUE
## 2      3        2     1     FALSE        TRUE
## 3      3        2     1     FALSE        TRUE
## 4      3        2     1     FALSE        TRUE
## 5      1        1     2      TRUE       FALSE
## 6      2        3     1     FALSE        TRUE

sum(df$wins_switch)/n

## [1] 0.66804


Following the switching strategy, the candidate's chances are 2 in 3, which counter-intuitively is quite logical: the candidate will lose in every case where his initial selection was correct (1 in 3), but will win in every case where his initial selection was wrong (2 in 3).
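This argument can also be checked without sampling, by exhaustively enumerating all equally likely prize/selection combinations – a small sketch, independent of the simulation code above:

```r
# All 9 equally likely (prize door, selected door) pairs
cases = expand.grid(price = 1:3, selected = 1:3)

# Staying wins exactly when the initial pick was right
p_stay = mean(cases$price == cases$selected)

# Switching wins exactly when the initial pick was wrong,
# because the moderator removes the only other losing door
p_switch = mean(cases$price != cases$selected)

p_stay   # 1/3
p_switch # 2/3
```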

Oh, and here is a nice clip explaining it much better:

# What happens with my AAA-rated bond portfolio in the next 30 years?

One question a client asked was how rating migrations would affect their portfolio in the long term, and how to adjust the asset allocation to keep a stable average rating. As a small demo and proof of concept, I wrote a small shiny app. It allows you to adjust the portfolio weights across the rating categories, the duration of the portfolio, and the growth rate of the portfolio.
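The mechanics behind such an app can be sketched with a one-year rating transition matrix: the rating distribution after n years is the initial weight vector multiplied by the n-th power of the matrix. The numbers below are made up for illustration, not real agency migration data:

```r
# Hypothetical one-year transition matrix for three buckets
# (rows = current rating, columns = rating one year later; each row sums to 1)
P = matrix(c(0.90, 0.08, 0.02,
             0.03, 0.90, 0.07,
             0.01, 0.04, 0.95),
           nrow = 3, byrow = TRUE,
           dimnames = list(c("AAA", "AA", "A"), c("AAA", "AA", "A")))

w = c(AAA = 1, AA = 0, A = 0)  # start fully invested in AAA

# Apply the one-year migration 30 times
for(i in 1:30) w = w %*% P

round(w, 3)  # rating distribution after 30 years
```

Even with modest annual downgrade probabilities, compounding over 30 years moves a large part of the initial AAA weight into lower buckets, which is why the allocation has to be rebalanced to keep the average rating stable.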

# Going off the peg – the Swiss case

So, the Swiss National Bank dropped its peg of the CHF against the EUR at 1.20, to the complete surprise of … well, anybody, I suppose. As a result, the EUR dropped to lows of 0.975 CHF, according to Yahoo, and was standing at 0.99 at pixel time.

The Swiss case is different from the normal case of a central bank that stops defending a peg, because usually the bank has to defend against a depreciation of its currency, typically by selling its foreign reserves. Of course, such a defense is not sustainable in the long run if the fundamentals favor a depreciation, as foreign reserves are finite.

Theoretically, these mechanisms have been well understood since Krugman's papers on currency crises and Obstfeld's later work on multiple equilibria, which explains speculative attacks in a grey zone of so-so fundamentals that could, for example, be triggered by contagion. The practical results were very visible in the Asian crisis of 1997, and but for the political costs of leaving the EUR, we would have witnessed them in Europe in the last couple of years.

This is a matter close to my heart, as I wrote my diploma thesis on this subject, in an attempt to develop an early warning indicator. But even if that indicator were still updated, and worked reasonably well, it would have missed the Swiss case – it was looking for depreciations against the central currency. The SNB, however, defended against an appreciation, and theoretically it should have been able to do so indefinitely, as it could simply have kept on selling CHF against EUR.

Another indicator apparently also proved disastrous for a number of retail FX brokers – Value at Risk based on historical volatility. At least one broker declared insolvency, another is talking with regulators and investors about raising fresh capital, and a number have halted trading in CHF. Here is an overview.

With a peg, of course, the volatility declines continuously, lowering the VaR. This allowed these brokers to decrease the required margin for their retail traders, or equivalently to increase their leverage, with many brokers requiring only 2% margin. Those trades are of course completely underwater, and unless the traders answer the margin calls, the brokers face a serious liquidity shortfall.
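To illustrate the mechanism (a simplified sketch, not any broker's actual model): a parametric VaR estimated from a trailing window of historical returns shrinks toward zero while a peg suppresses volatility, so the actual devaluation move dwarfs the risk estimate:

```r
set.seed(1)

# Simulated daily log returns: a pegged phase with tiny volatility,
# followed by one large devaluation-style shock
pegged = rnorm(250, mean = 0, sd = 0.0005)
shock  = -0.15
returns = c(pegged, shock)

# One-day parametric 99% VaR based on the volatility of a trailing window
var_99 = function(r) -qnorm(0.01) * sd(r)

var_before = var_99(returns[1:250])  # VaR estimated during the peg

var_before                # tiny: the peg made historical volatility almost vanish
abs(shock) / var_before   # the actual move, in multiples of the estimated VaR
```

With a margin requirement calibrated to such a VaR, a single move of this size wipes out the posted collateral many times over.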

So, why did the SNB act the way it did? I can only speculate, of course, but here are some points which I think are at least noteworthy:

• While Switzerland has a positive current account overall, its current account with the EU is negative; therefore an appreciation would help to close this gap.
• In a similar way, pegging to the EUR led to a depreciation against the USD and other major currencies, which might have created the impression that the pressure on the CHF to appreciate against the trade-weighted basket had fallen to a manageable degree.
• Expectations of European quantitative easing would weaken the EUR further, accelerating the growth of the SNB's balance sheet, which might have been seen as a political liability. This I don't find very convincing, as the SNB is independent from direct government intervention, and a referendum to prescribe a minimum holding of gold as reserves had just lost a popular vote, strengthening its independence further.

# On Russia

The Russian central bank hiked the policy rate tonight from 10.5% to 17.0%. In the market this led to a nearly parallel shift of the yield curve, with a steepening in the 1-5 year range.

For me, this leads to some questions:

• How large is the duration mismatch of the Russian banks?
• Assuming it is not trivial, how long will the banks survive the inverted yield curve?

The macro problems of Russia show all the symptoms of Dutch disease – a strong resource sector leading to a real appreciation of the ruble, making the rest of the economy less competitive. The current abrupt reversal following the decline in oil prices kills the forex inflow on which the domestic consumption of imports relied, making the interest rate hike necessary. This in turn kills investments in the private sector, which would be necessary to capitalize on the new terms of trade by increasing exports and the consumption of domestic goods.

PS: Gone with the wind – after gaining 10% in early trading after the hike, the ruble lost all gains before lunch.

# Scratching that itch from ifelse

Okay, as I wrote yesterday, ifelse is rather slow, at least compared to working in C++. As my current project uses ifelse rather a lot, I decided to write a small utility function. In the expectation that I will collect a number of similar functions, I made a package out of it and posted it on github: https://github.com/ojessen/ojUtils

I get a speedup of about 30 times, independent of the target type.
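For comparison – this is not part of ojUtils, just a base-R sketch – much of ifelse's overhead can also be avoided without C++ by pre-allocating the result and using vectorized subset assignment:

```r
# A simple base-R replacement for ifelse(test, yes, no),
# assuming test contains no NAs and yes/no have the same length as test
fast_ifelse = function(test, yes, no){
  out = no
  out[test] = yes[test]
  out
}

test = c(TRUE, FALSE, TRUE)
fast_ifelse(test, c(1, 2, 3), c(10, 20, 30))  # 1 20 3
```

The base ifelse is slow mainly because it copies its input, handles NAs and preserves attributes; dropping those guarantees is what buys the speed, both here and in the C++ version.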

Feedback and corrections greatly appreciated.

Thanks to the people at Travis for providing a free CI server which works directly with github. This of course is a tiny example, but it is good to know that the workflow to set this up can be done in 5 minutes.

And thanks to Romain François for showing some Rcpp sugar:

Some data:

require(ojUtils)
## Loading required package: ojUtils
require(microbenchmark)
## Loading required package: microbenchmark
test = sample(c(T,F), size = 1e5, T)
yes = runif(1e5)
no = runif(1e5)

microbenchmark(ifelse(test, yes, no), ifelseC(test, yes, no))
## Loading required package: Rcpp
## Unit: microseconds
##                    expr   min      lq  median      uq    max neval
##   ifelse(test, yes, no) 31925 33404.8 34065.1 58083.5  71891   100
##  ifelseC(test, yes, no)   620   647.5   721.8   817.7 209254   100
test = sample(c(T,F), size = 1e5, T)
yes = rep("a", 1e5)
no = rep("b", 1e5)

microbenchmark(ifelse(test, yes, no), ifelseC(test, yes, no))
## Unit: milliseconds
##                    expr    min     lq median     uq   max neval
##   ifelse(test, yes, no) 57.313 58.763 59.626 72.435 87.92   100
##  ifelseC(test, yes, no)  1.747  1.837  1.926  2.749 29.56   100
test = sample(c(T,F), size = 1e5, T)
yes = rep(1L, 1e5)
no = rep(2L, 1e5)

microbenchmark(ifelse(test, yes, no), ifelseC(test, yes, no))
## Unit: microseconds
##                    expr     min      lq  median      uq   max neval
##   ifelse(test, yes, no) 30747.6 31868.5 32274.8 32829.0 59412   100
##  ifelseC(test, yes, no)   453.7   548.9   581.5   646.2 27575   100
test = sample(c(T,F), size = 1e5, T)
yes = rep(T, 1e5)
no = rep(F, 1e5)

microbenchmark(ifelse(test, yes, no), ifelseC(test, yes, no))
## Unit: microseconds
##                    expr     min      lq  median      uq   max neval
##   ifelse(test, yes, no) 29331.2 31167.3 31719.7 32455.3 60589   100
##  ifelseC(test, yes, no)   460.1   537.1   566.8   640.7 27118   100