There was nothing in those models that would even hint at catastrophe, because models can only tell you what you hard-wire into them.
Models are based on assumptions
Hence they are disputable and, at the same time, provide an enormous margin of manoeuvre. And, when needed, impunity.
There are no universally accepted laws on building math models or rating schemes. Sure, you can dream up an equation, claim that it provides a basis for pricing some derivative, and then have people invest based on it. You cannot be held accountable simply for using an equation that one day implodes. You cannot take mathematics to court, but you can put in prison an engineer or a doctor who is responsible for the loss of lives. Why is that? Because physics is not an opinion, while financial mathematics, together with its underlying assumptions, is. Just because you manipulate equations according to strict rules doesn't mean you're doing science. You could just as well be playing an extravagant video game with no relevance to, or reflection of, anything physical that really exists.

The fact that we are still unable to fix the mess, even though everything went off the rails almost ten years (and many trillions of dollars) ago, just goes to show how little we understand the economy, its systemic aspects and its dynamics.
We must change our approach radically
When you face a super-complex system which you don’t understand – the crisis proves that we understand the economy very little – do you model very precisely a tiny subset thereof or do you try to get a global coarse picture of the situation? Isn’t it true that the closer you look the less you see?
The Principle of Incompatibility, formulated by L. Zadeh at UC Berkeley, states that high precision is incompatible with high complexity. This means that the economy – which is evidently highly complex – cannot be modelled precisely, and that all effort to squeeze decimals out of math models is futile, even though sometimes this gets you into the Nobel zone. In actual fact, the more complex the model one conceives, the more assumptions one must make. And that means more risk and, at the same time, more freedom to steer the model in a desired direction.
From a practical and physical standpoint, what is the difference between AAA and AA+? Is it correct (and ethical) to have over 20 rating classes?
So we need to change the paradigm
Less hair-splitting, less fiddling with decimals, unlikely probability distributions or Brownian motion. Things have gotten very complex, and we must place science, not mathematical alchemy, at the centre of our thinking.
The Probability of Default (PoD) of a company is the central concept behind a rating, and ratings are a key link between the markets and investors. Their importance cannot be overstated. However, the PoD is not a physical quantity, and there are many ways of computing it. Each method has its own assumptions – the degrees of freedom are phenomenal. Not only is a PoD a non-physical quantity, it is also highly subjective. In fact, rating agencies themselves claim that ratings are merely opinions. In mechanical engineering, by contrast, things like mass, strength, energy, stiffness or margin of safety are computed according to non-negotiable laws of physics which are the same all over the world. The PoD obeys no such laws.
It may have become some sort of standard, but it is not the result of any law of physics. This means the PoD must be replaced by something more rational and relevant – something that not only has its roots in physics, but is also more in line with the turbulent character of our times.
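To make the assumption-dependence concrete, here is a sketch of one textbook way – among many, and not the method of any particular agency – of computing a PoD: a Merton-style structural model, in which the default probability follows entirely from assumed asset dynamics. All numbers below are hypothetical; note how changing a single assumption (the volatility) changes the "same" company's PoD several-fold.

```python
from math import log, sqrt
from statistics import NormalDist

def merton_pod(assets, debt, mu, sigma, horizon=1.0):
    """Toy Merton-style probability of default.

    Assumes the firm's asset value follows geometric Brownian motion
    and that default occurs if assets fall below debt at the horizon.
    Every input here is an assumption - which is exactly the point.
    """
    d2 = (log(assets / debt) + (mu - 0.5 * sigma**2) * horizon) / (sigma * sqrt(horizon))
    return NormalDist().cdf(-d2)

# The same hypothetical firm under two defensible volatility assumptions:
pod_low_vol = merton_pod(assets=120.0, debt=100.0, mu=0.05, sigma=0.15)
pod_high_vol = merton_pod(assets=120.0, debt=100.0, mu=0.05, sigma=0.35)
print(pod_low_vol, pod_high_vol)
```

Two analysts using the same formula on the same balance sheet, differing only in their volatility estimate, would report materially different default probabilities – and neither could be proven wrong.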
Let’s not forget that ratings were conceived a century ago
The world was very different then. Conventional business intelligence and analytics technology has become dangerously outdated and, most importantly, is not well suited to a turbulent economy.
As the complexity of the economy increases, traditional analytics produces results of increasing irrelevance. Mathematically correct but irrelevant. Markets are not efficient. In nature there is no such thing as equilibrium.
So, beyond a PoD-based rating, we propose a complexity- and resilience-based rating. High complexity is, in all likelihood, the most evident and dramatic characteristic of the economy, and indeed the hallmark of our times. Resilience is the capacity to withstand extreme events, and it is a measurable physical quantity – there are, for example, standard tests in engineering to determine the resilience of materials. A resilience rating is applicable to companies, stocks, portfolios, funds, systems of companies or national economies. In our turbulent economy, which is fast, uncertain and highly interdependent, extreme and sudden events are becoming quite common. Such events will become more frequent and more intense, exposing fragile businesses to apparently unrelated events originating thousands of kilometres away. It is good to be resilient.
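The engineering analogy can be made tangible with a toy, fully rule-based resilience proxy. The metric below (maximum drawdown of a fundamentals or price series) is an illustrative stand-in chosen for this sketch, not the actual QCM computation, which this article does not specify; the point is only that, unlike a PoD, it involves no human in the loop and no negotiable assumptions beyond the input data.

```python
def max_drawdown(series):
    """Largest peak-to-trough drop, as a fraction of the running peak."""
    peak = series[0]
    worst = 0.0
    for x in series:
        peak = max(peak, x)
        worst = max(worst, (peak - x) / peak)
    return worst

def resilience_score(series):
    """Map drawdown to a 0-100 score: 100 means the series never fell from a peak."""
    return round(100 * (1 - max_drawdown(series)), 1)

# Hypothetical quarterly revenue paths for two companies:
steady = [100, 102, 101, 104, 106, 108]
shocked = [100, 110, 60, 70, 80, 85]
print(resilience_score(steady), resilience_score(shocked))
```

Given the same inputs, any two analysts anywhere in the world would compute the same score – which is precisely the property the PoD lacks.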
Despite their bad reputation, conflicts of interest and lawsuits, rating agencies will probably continue to flourish. What can be done at this point is to provide a mechanism which allows investors to check how ‘solid’ a given rating actually is. In other words, how trustworthy is a rating? This is how it can be done. Rating agencies typically use the fundamentals (Balance Sheet, Income Statement, Cash Flow, Ratios, etc.) to establish a rating based on a set of calculations. This is the ‘scientific part’ of the process. Then comes the subjective human component, in the form of interviews with the management of the rated company, after which analysts subjectively decide on a rating based on their experience, sensations, benchmarks, scorings, etc. The process is so subjective that two rating agencies will not always agree on a rating for the same company. Even two analysts in the same rating agency can disagree on a rating!
Ratings really are opinions, not science
However, the same fundamentals can be used to compute a resilience rating which does not involve humans in the loop. The results, as indicated in the figure below, generally fall into four cases:
[Figure: matrix of Conventional rating (investment grade / non-investment grade) against Resilience rating (high / low), giving four cases]
The cases in which the two ratings disagree are of course the most interesting – in particular, the case in which a company is awarded an investment-grade rating while, at the same time, its resilience is low. In other words, a great but fragile rating.
ABOUT THE AUTHOR – Jacek Marczyk, author of nine books on uncertainty and complexity management, developed the Quantitative Complexity Theory (QCT) and Quantitative Complexity Management (QCM) methodologies in 2003, together with a new complexity-based theory of risk and rating. In 2005 he founded Ontonix, a company delivering complexity-based early-warning solutions with particular emphasis on systemic aspects and turbulent economic regimes. He introduced the Global Financial Complexity and Resilience Indices in 2013. Since 2015 he has been Executive Chairman of Singapore-based Universal Ratings.