How to ‘fix’ standard deviations

Standard deviations are a popular and often useful measure of dispersion. To be sure, a standard deviation is merely the root-mean-square deviation from the mean. It also doesn’t take into account the shape of the probability distribution function (shape is captured better using, for example, entropy, which is a more versatile measure of dispersion).
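To make the point concrete, here is a minimal sketch (not part of the original analysis): two samples with essentially the same standard deviation but very different shapes are indistinguishable by the standard deviation alone, while a histogram-based Shannon entropy tells them apart.

```python
import numpy as np

rng = np.random.default_rng(0)
gaussian = rng.normal(0.0, 1.0, 100_000)                  # bell-shaped sample
bimodal = np.where(rng.random(100_000) < 0.5, -1.0, 1.0)  # two spikes at +/-1
bimodal *= gaussian.std() / bimodal.std()                 # force identical stds

def shannon_entropy(x, bins=50):
    """Shannon entropy (in nats) of the empirical distribution of x."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

print(gaussian.std(), bimodal.std())                        # essentially equal
print(shannon_entropy(gaussian), shannon_entropy(bimodal))  # very different
```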

Standard deviations, however, may be ‘adjusted’ to take into account an interesting aspect of the data, namely complexity. Let’s see an example. Say you have a portfolio of 28 stocks, all of which are independent (i.e. uncorrelated). In such a case the complexity map of the portfolio looks like the one below.

One computes the standard deviation of each stock and may then use it to measure the volatility of the portfolio, or other measures of risk. Suppose now that some of the stocks are indeed correlated, and that the complexity map is now the one below.
Stocks 5 and 7, for example, are correlated with numerous other stocks, while stocks 3, 6 and 25 are uncorrelated. This is reflected in the Portfolio Complexity Profile (or Portfolio Complexity Spectrum), which ranks the complexity footprint of each stock in the portfolio, as illustrated below.
Stock 7 has a footprint of just over 17% while stock 5 is responsible for nearly 15% of the complexity of the portfolio.

Clearly, just as in the previous case, one can calculate the standard deviations of all the stocks one by one. However, in the first case all the stocks were uncorrelated, while here some of them are. The two cases are obviously different, in particular from a structural point of view. The question now is this: why not use the information in the Complexity Profile to ‘adjust’ standard deviations by adding a correction originating from complexity? Clearly, a stock that is heavily correlated to other stocks in a portfolio can be more ‘dangerous’ than an uncorrelated one. Expressing this is, of course, the job of covariance:

Covariance(i,j) = Correlation(i,j) x STD(i) x STD(j)
But why not take this into account at the level of the standard deviations themselves? One simple way to accomplish this is the following:
Adjusted STD = (1 + Complexity contribution) x STD
Basically, stocks that increase portfolio complexity see their standard deviations corrected (increased) by a complexity-based factor. The (ranked) result is illustrated below.
The bar chart below shows the complexity-induced corrections of standard deviations:
For example, the standard deviation of the biggest complexity contributor – stock 7 – which is 3.81, is incremented by 17.1% (its complexity footprint) to yield a value of 4.46. The norm of the original covariance matrix is 58.21, while the ‘corrected’ covariance matrix has a norm of 68.15.
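As a sketch of the arithmetic (the complexity footprints themselves come from a Complexity Profile, whose computation is proprietary and not reproduced here), the adjustment and its effect on the covariance matrix can be expressed as follows:

```python
import numpy as np

def adjust_stds(stds, footprints):
    """Adjusted STD_i = (1 + footprint_i) * STD_i, footprints as fractions."""
    return (1.0 + footprints) * stds

def covariance(corr, stds):
    """Cov(i, j) = Corr(i, j) * STD(i) * STD(j)."""
    return corr * np.outer(stds, stds)

# Check against the worked example: 3.81 * (1 + 0.171) ~ 4.46 for stock 7.
print(adjust_stds(np.array([3.81]), np.array([0.171])))   # [4.4615...]

# For the full portfolio (inputs estimated elsewhere), compare matrix norms:
# corr, stds, fp = ...   # 28x28 correlations, stds, complexity footprints
# print(np.linalg.norm(covariance(corr, stds)),
#       np.linalg.norm(covariance(corr, adjust_stds(stds, fp))))
```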

Portfolio complexity, a factor that is generally neglected when analyzing or designing a portfolio (a covariance matrix is a poor substitute), ‘increases’ standard deviations, eloquently illustrating the concept of complexity-induced risk.

Classical statistics may produce overly optimistic results if complexity is neglected. In reality, every system has some degree of complexity, which is invisible to conventional analytics, and there is often more risk than one may think.

How the Rating of Bitcoin Compares to that of Fiat Currencies

This analysis dates from December 2017. In light of Bitcoin’s recent declines, it should be read with that in mind.

Fiat money has no intrinsic value – it is not backed by a commodity of equal value but is made legal tender by government decree. Bitcoin, by contrast, is a virtual cryptocurrency and worldwide payment system.

It is the first decentralized digital currency – the system works without a central repository or single administrator – and was introduced in 2009. Bitcoin has recently been rated for the first time, receiving a very high, five-star Resistance to Shocks (RtS) rating. But how does that compare to the RtS ratings of major fiat currencies such as the British Pound, the Yen and the Euro? Let us see what has happened over the past four years. All currencies are priced with respect to the US Dollar.

[Charts: RtS rating histories over the past four years for Bitcoin, Yen, Pound and Euro.]

As may be observed, Bitcoin’s rating is significantly less rugged, more stable and higher than those of the other currencies. At the time of writing (December 10, 2017), the RtS ratings are as follows:

Bitcoin – 99.3% (five stars)

Pound – 78.8% (three stars)

Yen – 87.7% (four stars)

Euro – 87.4% (four stars)

The RtS rating of the Pound is the result of the recent Brexit negotiations. In terms of RtS rating distributions, the plots below indicate that, of the three fiat currencies, the Yen has the highest most likely value, approximately 94% (four stars), while the most likely ratings of the Pound and the Euro are around 89% and 87% (three stars) respectively. The most likely RtS rating of Bitcoin is 99% (five stars). The plots illustrating the distributions are shown below.

[Plots: RtS rating distributions for Bitcoin, Yen, Pound and Euro.]
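For readers who wish to reproduce the ‘most likely value’ readings, a minimal sketch follows; it assumes a daily RtS series is available as a plain array (the file name is hypothetical) and takes the mode to be the peak of the empirical distribution.

```python
import numpy as np

def most_likely_rating(ratings, bins=40):
    """Mode of an RtS series: midpoint of the fullest histogram bin."""
    counts, edges = np.histogram(ratings, bins=bins, range=(0.0, 100.0))
    i = counts.argmax()
    return 0.5 * (edges[i] + edges[i + 1])

# daily_rts = np.loadtxt("yen_rts.csv")   # hypothetical daily RtS series
# print(most_likely_rating(daily_rts))    # ~94 for the Yen, per the text
```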

It is important to keep in mind that RtS ratings do not reflect the value of a particular currency – they merely convey the degree of chaoticity, or disorder, in the dynamics of its price variation.

Payment card industry in Vietnam – A systemic risk analysis

From a recent article by the Vietnam Chamber of Commerce and Industry:

Cash remains king in Vietnam but credit card issuers are predicting an imminent boom in the card market with the number of Vietnamese cardholders potentially growing by 10 times the current number of nearly one million.

Nguyen Thu Ha, chairperson of the Vietnam Card Association under the Vietnam Banking Association, said from both the macro-economic and banking perspective, the domestic card market is considered very strong given the rising incomes among the country’s 82-million people, rapid economic growth and improving legal system.

Increasing tourist arrivals and the influx of money remitted home by overseas Vietnamese would also facilitate credit card growth, Ha said at a conference touting the potential for electronic payments in Hanoi last week.

So, in theory, things look great for the Vietnamese payment card industry. Let’s see if this is confirmed by a systemic Resistance to Shocks analysis. In other words, instead of analyzing, for example, a single bank issuing credit cards, we will analyze a total of forty banks as a system. The analysis has been performed using publicly available data, namely the following 35 parameters (all relative to 2016):
Total number of cards
Number of Domestic Debit cards
Number of International Debit cards
Number of Domestic Credit cards
Number of International Credit cards
Number of Domestic Prepaid cards
Number of International Prepaid cards
Number of Other Cards
Total Cards Revenue
Domestic Debit Card Revenue
International Debit Card Revenue
Domestic Credit Card Revenue
International Credit Card Revenue
Domestic Prepaid cards Revenue
International Prepaid cards Revenue
Other Cards Revenue
Total Card Payment Revenue
International Card payment revenue at card accepting units
International Card payment revenue at ATMs
Domestic Card payment revenue at card accepting units
Turnover of Cash Advances by Domestic Cards at POS
Domestic Card payment revenue at ATMs
Number of ATMs as of 31/12/2015
Number of ATMs as of 31/12/2016
Number of POS as of 31/12/2015
Number of POS as of 31/12/2016
Cash Withdrawal
Cash Transfer
Revenue spending at the Card Accepting Unit
Contract Payment Revenue
Other Domestic Cards Revenue
International Card Payment Revenue at Card Accepting Units
Online Payment of International Cards at Card Accepting Units
Domestic Card Payment Revenue at Card Accepting Units
Online Payment of Domestic Cards at Card Accepting Units

The corresponding Complexity Map is illustrated below:

The analysis reveals a very high Resistance to Shocks (RtS), namely 95.6%, which corresponds to a five-star rating. It is interesting to note that this situation hinges on a handful of parameters, led by the number of ATMs, cash withdrawals and the number of international debit cards. The first four parameters in the ranking are responsible for nearly 38% of the overall state of health of the system. Any policy aiming to improve or strengthen the payment card industry in Vietnam should target these parameters first.

The complete ranking of parameters in terms of how they impact the situation is reported in the chart below (values are in %).
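The ‘first four parameters account for nearly 38%’ reading can be reproduced from any ranked footprint vector; the sketch below uses randomly generated placeholder footprints, not the actual output of the analysis.

```python
import numpy as np

# Placeholder footprints (%) for the 35 parameters, summing to 100.
footprints = np.random.default_rng(1).dirichlet(np.ones(35)) * 100

ranked = np.sort(footprints)[::-1]   # descending, as in the Complexity Profile
top4_share = np.cumsum(ranked)[3]    # share explained by the first four
print(ranked[:4], top4_share)
```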

It is also interesting to analyze the system of forty card-issuing banks. The Complexity Map, based on the above data per bank, is illustrated below:

Again, the RtS rating is very high – a staggering 99%, with a five-star rating. One must remember, however, that this is an analysis based on payment card data alone. The curious thing is that the degree of interdependency of this system of forty banks is 87%, which is extremely high. A glance at the map shows that it is very dense: every bank is correlated with almost every other bank. This is not good if the system is exposed to a shock, as the shock’s effects would propagate very quickly throughout the entire network.
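One plausible way to quantify such a figure is the density of the map, i.e. the share of realized links among all possible bank pairs; whether this matches the ‘degree of interdependency’ used above exactly is an assumption.

```python
import numpy as np

def interdependency(adjacency):
    """Share of possible bank-to-bank links actually present in the map."""
    n = adjacency.shape[0]
    links = (np.count_nonzero(adjacency) - np.count_nonzero(np.diag(adjacency))) / 2
    return links / (n * (n - 1) / 2)

demo = np.ones((40, 40)) - np.eye(40)   # a fully connected 40-bank map
print(interdependency(demo))            # 1.0; the map above would give ~0.87
```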
In terms of systemic risk, the banks which are situated at the top of the bar chart shown below are the ones that are most exposed (values in %).

The banks most exposed to systemic risk are Eximbank, Maritime Bank, Ocean Bank, VPBANK and CBBank. In case of a shock, these banks would be the most vulnerable. Note that they are also the hubs of the Complexity Map, i.e. they have the largest number of inter-dependencies with the other banks.
Based on the 2016 revenue and systemic exposure of each bank, the footprint (weight) of each bank on the system is indicated below (VPBANK has a value of 1 as it has the highest revenue and is taken as the reference).

What is evident is that VPBANK is critical to the system (i.e. to the payment card industry). It has the highest revenue which, combined with a high systemic exposure (number 3 in the previous ranking), turns it into a hub. Any action aiming at the improvement or growth of this particular sector should be targeted at the banks at the top of this chart.
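A sketch of one plausible weighting consistent with the description – revenue times systemic exposure, normalized so that the highest-revenue (reference) bank scores 1; the exact formula is an assumption.

```python
import numpy as np

def bank_footprints(revenue, exposure):
    """Revenue x exposure, normalized so the top-revenue bank scores 1."""
    raw = revenue * exposure
    return raw / raw[revenue.argmax()]

rev = np.array([9.0, 4.0, 6.0])    # hypothetical 2016 revenues
exp = np.array([0.5, 0.9, 0.7])    # hypothetical systemic exposures
print(bank_footprints(rev, exp))   # [1.0, 0.8, 0.933...]
```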

Credit rating agency issues warning on climate change to cities

One of the largest credit rating agencies in the country is warning U.S. cities and states to prepare for the effects of climate change or risk being downgraded.

In a new report, Moody’s Investor Services Inc. explains how it assesses the credit risks to a city or state that’s being impacted by climate change — whether that impact be a short-term “climate shock” like a wildfire, hurricane or drought, or a longer-term “incremental climate trend” like rising sea levels or increased temperatures.

Ratings from agencies such as Moody’s help determine interest rates on bonds for cities and states. The lower the rating, the greater the risk of default. That means cities or states with a low rating can expect to pay higher interest rates on bonds.

“This puts a direct economic incentive [for communities] to take protective measures against climate change,” says Rachel Cleetus, the lead economist and climate policy manager at the Union of Concerned Scientists.

It can be difficult for a policymaker to justify a big investment when the associated benefits or risks seem a long way down the road. Moody’s announcement may change that.

Read the complete article: Credit Rating Agency Issues Warning On Climate Change To Cities | NPR

Rating the Bitcoin – When new technologies meet

Bitcoin is a cryptocurrency and worldwide payment system. It is the first decentralized digital currency – the system works without a central repository or single administrator – and was introduced in 2009. Unlike fiat money, Bitcoin is de-centralized and, more importantly, not under the control of bankers or financial regulators – an argument Bitcoin supporters often use to claim the currency is insulated from any kind of manipulation.
New Bitcoins are generated by a competitive and decentralized process called “mining”, in which individuals are rewarded by the network for their services. Bitcoin miners process transactions and secure the network using specialized hardware, collecting new bitcoins in exchange. Basically, it is a high-tech exercise which requires sufficient computational firepower.
Bitcoins, just like traditional currencies, are traded. Recently the value of Bitcoin has been rising very rapidly and there is much excitement in the markets, as well as talk of a potential Bitcoin bubble. Bitcoin futures have recently been approved and, unlike futures on the regular markets, they have more than one settlement venue. This adds complexity to a cryptocurrency which is already complex in itself.
Given this (growing) complexity, and the emergence of new cryptocurrencies such as Ethereum, Ripple, Litecoin or Monero, it is interesting to measure the complexity of Bitcoin, as well as its rating – obviously, a Resistance to Shocks rating. Over the past few years the price of Bitcoin has kept increasing, notwithstanding destabilizing events such as the Ukraine crisis, Brexit, the US elections and the Korean crisis, as well as scandals, tsunamis and the fall of oil prices.
The price of the Bitcoin over the past 8 years is indicated in the plot below. It clearly shows a phenomenal acceleration over the past year.

The complexity of the dynamics of Bitcoin’s price – the complexity of Bitcoin, in other words – is shown in the next plot. Here we note something interesting: when complexity increases, the price goes down (this starts in 2013); when complexity decreases, the price goes up again, which is clearly visible after 2015. At present, as Bitcoin is skyrocketing, its complexity is dipping.
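The inverse relationship can be checked numerically whenever daily price and complexity series are available; the sketch below uses synthetic data (the QCT complexity measure itself is proprietary) and correlates day-to-day changes rather than levels, since two trending series would otherwise show a spurious correlation.

```python
import numpy as np

def change_correlation(price, complexity):
    """Correlation of day-to-day changes of two daily series."""
    return np.corrcoef(np.diff(price), np.diff(complexity))[0, 1]

rng = np.random.default_rng(2)
p = np.cumsum(rng.normal(size=1000))        # synthetic price path
c = -p + rng.normal(scale=0.5, size=1000)   # complexity moving opposite
print(change_correlation(p, c))             # strongly negative
```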

The Resistance to Shocks rating of Bitcoin is depicted in the last chart, below. The rating is very high most of the time, close to 100%, which corresponds to a five-star rating. The minimum value of 80% – a four-star rating – was reached in 2016; however, the rating quickly recovered to 90% and above. For the moment, things look pretty solid.

Given that, unlike corporations, currencies (and cryptocurrencies) react quickly, the RtS rating of Bitcoin is issued on a daily basis, the goal being to capture the dynamics of a rapidly changing economy. This is why the above plot is continuous.
The above analysis is unique. Bitcoin is a high-tech cryptocurrency, and RtS ratings are provided by an equally high-tech rating robot. While conventional currencies can be manipulated, not to mention simply printed, credit rating agencies are known for opaque rating practices and conflicts of interest. What this short article illustrates is how leading-edge technologies can join forces in a context devoid of regulators, administrators and bankers – and, most importantly, one where manipulations take place on a daily basis.

Jacek Marczyk

Author of nine books on uncertainty and complexity management, Jacek developed the Quantitative Complexity Theory (QCT), a new complexity-based theory of risk and rating, in 2003. In 2005 he founded Ontonix, a company delivering complexity-based early-warning solutions with particular emphasis on systemic aspects and turbulent economic regimes. Read more publications by Jacek

Who rates ratings?

The economy is a dynamic system that is far too complex for us to understand. Human nature is extremely complex, and billions of irrational humans form the economy. How can such a system ever be thought to be efficient, in equilibrium and stable, as many prominent economists have claimed? Yet this system, like every other natural or man-made system, must respect the non-negotiable laws of physics, even if they are unknown at a given time.
Among the instruments that enabled the 2008 crisis were sophisticated math models and ratings. There was nothing in those models that would even hint at catastrophe, because models can only tell you what you hard-wire into them. The construction of placebo-generating models has led to a Panglossian approach to finance and economics which excludes extreme events and catastrophes, allowing bubbles to grow and Ponzi schemes to flourish. Failure was simply not contemplated in the models, and there are no model-building laws that would force one to contemplate it.

There was nothing in those models that would even hint at catastrophe, because models can only tell you what you hard-wire into them.

Models are based on assumptions

Hence they are disputable and, at the same time, provide an enormous margin of manoeuvre. And, when needed, impunity.

There are no universally accepted laws on building math models or rating schemes. Sure, you can dream up an equation and claim that it provides a basis for pricing some derivative, and then have people invest based on it. You cannot be held accountable simply because you used an equation that one day implodes. You cannot take mathematics to court, but you can put in prison an engineer or a doctor who is responsible for the loss of lives. Why is that? Because physics is not an opinion, while financial mathematics, together with its underlying assumptions, is. Just because you manipulate equations according to strict rules doesn’t mean you’re doing science: you could just as well be playing an extravagant video game with no relevance to, or reflection in, anything physical that really exists. The fact that we are still unable to fix the mess, even though everything went off the rails almost ten years (and many trillions of dollars) ago, just goes to show how little we understand the economy, its systemic aspects and its dynamics.

We must change our approach radically

When you face a super-complex system which you don’t understand – the crisis proves that we understand the economy very little – do you model very precisely a tiny subset thereof or do you try to get a global coarse picture of the situation? Isn’t it true that the closer you look the less you see?

The Principle of Incompatibility (L. Zadeh, UC Berkeley) states that high precision is incompatible with high complexity. This means that the economy – which is evidently highly complex – cannot be modelled precisely, and that all effort to squeeze decimals out of math models is futile, even if it sometimes gets you into the Nobel zone. In actual fact, the more complex the model one conceives, the more assumptions one must make. And that means more risk and, at the same time, more freedom to steer the model in a desired direction.

From a practical and physical standpoint, what is the difference between AAA and AA+? Is it correct (and ethical) to have over 20 rating classes?

So we need to change the paradigm

Less hair-splitting, less fiddling with decimals, unlikely probability distributions and Brownian motion. Things have gotten very complex, and we must place science, not mathematical alchemy, at the centre of our thinking.

The Probability of Default (PoD) of a company is the central concept behind a rating, and ratings are a key link between the markets and investors; their importance cannot be overstated. However, the PoD is not a physical quantity, and there exist very many ways of computing it. Each method has its own assumptions – the degrees of freedom are phenomenal. Not only is a PoD a non-physical quantity, it is also highly subjective; in fact, rating agencies themselves claim that ratings are merely opinions. In mechanical engineering, by contrast, things like mass, strength, energy, stiffness or margin of safety are computed according to non-negotiable laws of physics which are the same all over the world. The PoD obeys no such laws.
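To illustrate how assumption-laden any PoD recipe is, here is one classical example – the Merton distance-to-default model, which presumes log-normally distributed asset values and a single debt horizon. Change the assumed volatility and the PoD changes dramatically.

```python
from math import log, sqrt
from statistics import NormalDist

def merton_pod(assets, debt, mu, sigma, horizon=1.0):
    """PoD = N(-DD) under Merton's log-normal asset assumptions."""
    dd = (log(assets / debt) + (mu - 0.5 * sigma ** 2) * horizon) \
         / (sigma * sqrt(horizon))
    return NormalDist().cdf(-dd)

# Same firm, two assumed asset volatilities -> very different PoDs:
print(merton_pod(120.0, 100.0, mu=0.05, sigma=0.15))   # ~0.07
print(merton_pod(120.0, 100.0, mu=0.05, sigma=0.35))   # ~0.31
```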

The PoD may have become some sort of standard, but it is not the result of any law of physics. This means it must be replaced by something more rational and relevant – something that not only has its roots in physics, but is also more in line with the turbulent character of our times.

Let’s not forget that ratings were conceived a century ago

The world was very different then. Conventional business intelligence and analytics technology has become dangerously outdated and, most importantly, is not well suited to a turbulent economy.

As the complexity of the economy increases, traditional analytics produces results of increasing irrelevance. Mathematically correct but irrelevant. Markets are not efficient. In nature there is no such thing as equilibrium.

So, beyond a PoD-based rating, we propose a complexity- and resilience-based rating. High complexity is, in all likelihood, the most evident and dramatic characteristic of the economy and the hallmark of our times. Resilience is the capacity to withstand extreme events, and it is a measurable physical quantity – there are, for example, standard tests in engineering to determine the resilience of materials. A resilience rating is applicable to companies, stocks, portfolios, funds, systems of companies or national economies. In our turbulent economy, which is fast, uncertain and highly interdependent, extreme and sudden events are becoming quite common. Such events will become more frequent and more intense, exposing fragile businesses to apparently unrelated events originating thousands of kilometres away. It is good to be resilient.

An impact test to measure the resilience (fragility) of a material

Despite their bad reputation, conflicts of interest and lawsuits, rating agencies will probably continue to flourish. What can be done at this point is to provide a mechanism which allows investors to check how ‘solid’ a given rating actually is – in other words, how trustworthy it is. This is how it can be done. Rating agencies typically use the fundamentals (balance sheet, income statement, cash flow, ratios, etc.) to establish a rating based on a set of calculations. This is the ‘scientific’ part of the process. Then comes the subjective human component, in the form of interviews with the management of the rated company, after which analysts subjectively decide on a rating based on their experience, sensations, benchmarks, scorings, etc. The process is so subjective that two rating agencies will not always agree on a rating for the same company. Even two analysts in the same rating agency can disagree on a rating!

Ratings really are opinions, not science

However, the same fundamentals can be used to compute a resilience rating which does not involve humans in the loop. The results, as indicated in the table below, generally fall into four cases:

Conventional rating    Resilience rating
High                   High
Low                    Low
High                   Low
Low                    High

The cases in which the two ratings disagree are of course the most interesting – in particular the case in which a company is awarded an investment-grade rating while, at the same time, its resilience is low. In other words, a great but fragile rating.
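A toy sketch of the four-way split (the cutoff is purely illustrative and not part of any rating methodology):

```python
def rating_quadrant(conventional, resilience, cutoff=0.5):
    """Classify a (conventional, resilience) pair into the four cases."""
    c = "High" if conventional >= cutoff else "Low"
    r = "High" if resilience >= cutoff else "Low"
    note = "  <- ratings disagree: look closer" if c != r else ""
    return f"conventional={c}, resilience={r}{note}"

print(rating_quadrant(0.9, 0.2))   # investment-grade yet fragile
print(rating_quadrant(0.8, 0.9))   # solid on both counts
```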

Before you invest, wouldn’t you want to know? Considering that rating agencies are unregulated, how about a second, truly independent opinion?


ABOUT THE AUTHOR – Jacek Marczyk, author of nine books on uncertainty and complexity management, developed the Quantitative Complexity Theory (QCT) and Quantitative Complexity Management (QCM) methodologies, a new complexity-based theory of risk and rating, in 2003. In 2005 he founded Ontonix, a company delivering complexity-based early-warning solutions with particular emphasis on systemic aspects and turbulent economic regimes. He introduced the Global Financial Complexity and Resilience Indices in 2013. Since 2015 he has been Executive Chairman of Singapore-based Universal Ratings.