How to save democracy from itself

In the past the Earth was populated by numerous civilizations; the Greeks, the Incas and the Romans are just a few notable examples. Because the temporal and spatial correlation between those civilizations was often non-existent, or very limited, if one happened to disappear or destroy itself, others remained or new ones simply emerged, sometimes independently of each other. Today, even though ideological and religious gradients still persist between countries and regions, the immense interdependency and complexity that characterize our times mean that the Earth is, in effect, populated by one single globalized society. Because of this interdependency we are all in the same boat. If our global society fails, that’s it. There is no other to replace it.

Any form of progress or growth is accompanied by an inevitable increase in complexity. However, this is true only until so-called critical complexity is reached. Critical complexity is a sort of physiological, chaos-rich and fragile state, or limit, beyond which no system can evolve, no matter what. In order to continue evolving beyond critical complexity, a civilization must find ways of overcoming the delicate and inevitable vulnerability in which self-inflicted destruction appears to be the most probable form of demise. Our annual analyses of data published by the World Bank, comprising hundreds of thousands of variables, indicate that the world will approach its own critical complexity around 2055-2060, amidst high fragility.


It is painful to watch our Western civilization dissolve itself because of rampant decadent individualism, mindless political correctness, the spread of junk culture, reckless immigration policies, abdication of responsibility, lack of moral and intellectual discipline, and relativism. We are liquefying, to cite Zygmunt Bauman, the structure of our society, which took millennia to form. Destruction of the family is seen as progress. Patriotism is seen as racism. Depravity, obsessive devotion to money, pornography, the celebration of ugliness and dysfunction, and mindless consumption are becoming the new mainstream values, the new normal. All this is simply insane. It must stop.

Our biggest threat is entropy – the production of disorder and waste – which inevitably accompanies any form of activity, including progress and growth as well as conflicts or cultural regression. Waste, in this context, is not just refuse; it includes moral and intellectual waste, which corrodes society and its structure. Since the second half of the 1990s the rate of entropy and disorder production in the world has suddenly doubled. For every step of progress our global society now produces double the entropy it produced in the previous 25 years. At such a rate, the garbage bin will be full in the next 30 to 40 years. Where will we dump our entropy and chaos then?

Governing a Liquid Society – How to Save Democracy From Itself

Democracy is a formidable and efficient entropy-producing regime. While it is seen as a conquest of humanity, democracy at the same time protects the endogenous mechanisms of self-annihilation that it spawns and then tolerates. Western societies, where democracy is deeply rooted, are very fragile. This is particularly true of Europe, which is exposed to external and internal threats that compound its high innate fragility. Because of a reckless immigration policy in times of economic uncertainty, fragmentation and fragility, Europe is invaded by millions of ‘immigrants’, who often import crime, jihadism and disease, and whose cultural contribution to an already immensely culturally diverse and rich continent is non-existent.

How can democracy be saved from itself in the three to four decades left before the world reaches its critical complexity? The book proposes to institute an entropy footprint, a sort of rating, for individual citizens, corporations, cities and countries, and to reward and tax them based upon the amount of disorder that each dumps into the system. In particular, the citizens’ rating system will make it possible to break the gridlock in which democracy finds itself today because of the one-man-one-vote system, by rewarding ‘low-entropy citizens’ with the right to cast more votes during elections. The expected result of such a skewing of the current system is that it will drive society to a state of lower entropy, hence greater resilience. The greatest crime of all is trying to make equal things that are not. If we really want to make a better society, we must reward those that are better than others. Under normal circumstances of ‘equilibrium’ the one-man-one-vote paradigm is probably the obvious choice. However, when science identifies and recognizes an emergency of the magnitude and gravity which we expose in this book, it becomes necessary to adopt the solution which, in this particular case, the same science suggests.
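The mechanics of the proposal can be caricatured in a few lines of code. Everything here – the 0-to-1 footprint scale, the thresholds and the 1-to-3 vote range – is an illustrative assumption, not taken from the book.

```python
# Hypothetical sketch of an entropy-footprint-weighted voting rule.
# Scale, thresholds and vote counts are invented for illustration only.

def votes_for(entropy_footprint: float) -> int:
    """Map a citizen's entropy footprint (0 = no disorder produced,
    1 = maximum disorder) to a number of votes: lower footprint, more votes."""
    if entropy_footprint < 0.2:
        return 3   # 'low-entropy citizen' rewarded with extra votes
    if entropy_footprint < 0.6:
        return 2
    return 1       # the baseline single vote

citizens = {"A": 0.1, "B": 0.5, "C": 0.9}
ballot_weights = {name: votes_for(f) for name, f in citizens.items()}
print(ballot_weights)  # {'A': 3, 'B': 2, 'C': 1}
```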

Quantitative Complexity Theory (QCT), the first theory of complexity that actually proposes a rational measure of the complexity of all physical systems, lies at the basis of the proposals made in this book. Science is not about talking or dreaming about things; it is about measuring and ranking them. QCT has been tested and applied for over a decade in countless projects, experiments and applications with tens of clients and institutions from four continents. Applications range from medicine to manufacturing, from economics and finance to defence and business intelligence. In 2018 a Complexity Chip was developed for real-time monitoring of the complexity of anything that is mission-critical, from military equipment to automotive software to patients in intensive care. QCT shows how our world is becoming increasingly complex and how there are physical limits to how complex it can get. It also hints at a solution. If we don’t act, around 2055-2060 our global family will not be Too Big To Fail. It will be Too Complex To Survive.

The book is available here.

Security and Resilience – Guidelines for complexity assessment process

According to ISO, “This document gives guidelines for the application of principles and a process for a complexity assessment of an organization’s systems to improve security and resilience. A complexity assessment process allows an organization to identify potential hidden vulnerabilities of its system and to provide an early indication of risk resulting from complexity.”

ISO 22375 originates from UNI 11613, published in 2015 and promoted by Ontonix. Ontonix is the principal co-author of UNI 11613.

Complexity-induced risk is today the most insidious form of risk

“We are pleased to have contributed to ISO 22375,” said Dr. J. Marczyk, the founder and President of Ontonix. “Complexity-induced risk is a new form of risk, introduced by Ontonix, whose management Ontonix has pioneered since its founding in 2005. Complexity-induced risk is today the most insidious form of risk,” he added. “We do, however, have reservations as to ISO 22375.

First of all, it provides a subjective assessment, in that it is based on arbitrarily assigned weights. Second, the analysis procedure has a strong linear flavour and discounts the presence of critical complexity. This last fact indicates that the standard leans heavily towards a qualitative analysis, neglecting such fundamental principles of physics as the Second Law of Thermodynamics. Finally, the standard speaks of resilience, but no measure of resilience is proposed or discussed,” he concluded.

Reinventing Anti-Money Laundering with complexity – Part 2

This is part 2 of the Reinventing Anti-Money Laundering with complexity analysis. Read part 1 here.

Madoff: The Aftermath

In 2014 Forbes magazine reported that JPM, “where Madoff kept the bank account at the centre of his fraud”, would pay a settlement of $1.7 billion. This resolved any potential criminal case against the bank arising from the Madoff scandal. JPM entered into a deferred prosecution agreement with federal prosecutors to resolve two felony charges of violating the Bank Secrecy Act.

The bank admitted to failing to file a “Suspicious Activity Report” after red flags about Madoff were raised and, prosecutors alleged, to not having had adequate anti-money-laundering compliance procedures in place.

JPM: The Sins of the Deposit

The vast majority of Madoff’s funds were deposited with JP Morgan Bank for two decades. During the nineties, according to prosecutors, JPM employees had raised concerns about Madoff’s consistently market-beating returns. One arm of JPM even pulled out of a deal with Madoff’s firm in 1998 after “too many red flags” were raised. By autumn 2008, JPM had itself redeemed a $200 million investment from Madoff’s firm, without notifying clients or authorities. In January 2007 and July 2008, transfers from Madoff’s accounts triggered JPM’s AML software, but JPM failed to file a Suspicious Activity Report (SAR). In October 2008, a U.K. subsidiary of JPM filed a report with the Serious Organised Crime Agency.


Meanwhile JPM, as the depositing bank, should have been able to identify the volumes of money in and money out, and that money deposited was not being paid into any investment account. It should also have identified the shortfalls in net flows much sooner. As the credit crisis intensified, investors tried to withdraw $7 billion from the firm. However, instead of investing deposits, Madoff had simply placed his clients’ money in his business account at Chase Manhattan Bank (part of JPM), and paid customers out of that account when they requested withdrawals.

To pay off those investors, Madoff needed new money from other investors. However, in November, the balance in the account dropped to dangerously low levels. Only $300 million in new money had come in, but customers had withdrawn $320 million. He had just barely enough in the account to meet his redemption payments on November 19. Even with a rush of new investors who believed Madoff was one of the few funds still doing well, it wasn’t enough to keep up with the avalanche of withdrawals.

The account at JPM, which at one point in 2008 held well over $5 billion, was now down to only $234 million. With banks having all but stopped lending to anyone, Madoff knew he could not borrow enough to cover outstanding redemption requests. He instructed the remaining balance to be paid out to relatives and selected investors.

The failings of JPM’s AML were therefore:

  • Failing to identify the anomaly and malignant purpose of the Madoff business account
  • Not identifying excessive deposits that were not moved into an investment account
  • Not intervening as redemptions accelerated
  • Failing to identify patterns between deposits and withdrawals
  • Not tagging or identifying the original source or payment chain
  • Human error in not actioning flags from the Anti-Money Laundering software
  • Failing to escalate a SAR to the authorities

The payments system has been progressively improved to protect the system against, and undermine, the laundering of the proceeds of crime and terrorism. It has not been designed foremost to identify fraud. AML systems thus still struggle to identify anomalies (fraud) perpetrated by existing customers and ‘upstanding’ taxpayers moving large assets between domestic accounts. This need not be immutable.

An ‘A-AMLS’ solution could take a variety of forms, but which?

  • Geotagging money paid-in, paid-out and the source and beneficiary of funds
  • Automated Suspicious Activity Reporting to remove human negligence
  • Codifying every deposit and settlement for intent and behaviour
  • Measure sequencing risk of outflows to inflows to detect ‘burn rate’
  • Rules-based modelling on a set of assumptions
  • Machine-learning based on past Ponzi scheme behaviour to identify characteristics of future fraud, enabling more sophisticated triggers for raising a SAR
  • Redesigning Delivery Versus Payment (DVP) in asset management with BlockChain
  • Anomaly-identification based on complexity observation

Applying Complexity to detect Anomalies rather than Modelling or Machine-Learning

The Principle of Incompatibility, due to L. Zadeh, implies that if one looks for a ‘small’ anomaly in a highly complex system, it will probably never be detected, because you cannot squeeze precision out of something that simply doesn’t have it. This means that small anomalies may slowly cause major issues or losses in the long run, which can make such schemes difficult to spot. However sophisticated AML systems have become through modelling and machine learning, they remain fragile to this principle. Dr Jacek Marczyk, inventor of model-free Quantitative Complexity Management (QCM), notes:


“Complexity doesn’t need to be modelled – it can be measured based on raw data. Models are based on assumptions, which are prone to error. Building (complex) models of something that is already complex is a highly subjective and risky exercise. Meanwhile, a machine learning system must see a given anomaly a sufficient number of times in order to learn to recognize it. In most cases, however, one cannot afford the luxury of multiple failures in order to learn to recognize an anomaly!” J. Marczyk.

Models are based on assumptions. Every time a model is used one would need to check if the said assumptions are indeed satisfied. How many people actually do that? Models need to be updated and maintained, a very costly exercise. In models certain factors are necessarily – because of computational cost or lack of data – neglected. Well, experience suggests that the most important things in a model are those it doesn’t contain.

Similarly, it is attractive to use machine learning, such as automated neural-network (self-learning) tools, to detect anomalies. However, a machine learning system must see a given anomaly a sufficient number of times in order to learn to recognise it. In most cases, one cannot afford the luxury of multiple failures in order to learn to recognise an anomaly! So, what can one do? How can one detect that something ‘wrong’ is happening? For example, how can a Ponzi scheme be detected in a vast universe of money transfer transactions?


In a model-free approach to complexity quantification, all that is needed is raw data spanning a particular period of time. In the case of financial transactions this can be minutes, days or months. From that we can create a QCM framework.
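Since the QCM algorithm itself is not given here, the model-free idea can be sketched with a crude stand-in: measure the amount of correlation structure in a window of raw data, with no model of the underlying system. The measure below (mean absolute off-diagonal correlation) is an illustrative proxy, not the QCM metric.

```python
import numpy as np

def structure_index(window: np.ndarray) -> float:
    """Hedged stand-in for a model-free complexity measure: the mean
    absolute off-diagonal correlation among the columns of a raw-data
    window (rows = time steps, columns = e.g. per-account net flows)."""
    corr = np.corrcoef(window, rowvar=False)
    n = corr.shape[0]
    off = np.abs(corr[~np.eye(n, dtype=bool)])
    return float(off.mean())

rng = np.random.default_rng(42)
independent = rng.normal(size=(250, 5))  # five unrelated accounts
linked = independent.copy()
# One account's outflows funded by another's inflows: hidden structure
linked[:, 1] = -linked[:, 0] + 0.1 * rng.normal(size=250)

print(structure_index(independent) < structure_index(linked))  # True
```

Only raw data enters the computation: no assumptions about how the accounts relate, which is the point being made above.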

The starting framework for a complexity anomaly-based AML system (A-AMLS) is:

  • Anomaly detection requires two things: 1) defining what an anomaly is, and 2) recognizing that in non-stationary systems the anomalies themselves will also change
  • The totality of payers and payees creates a network, producing observable data
  • Complexity is a function of that structure
  • High complexity implies fragility
  • Total deposits in and withdrawals out are complex across the network, in terms of time, different amounts and different payees and payers
  • The resilience of the account is a function of its burn rate, the deficit/surplus of net flows across the network
  • Highly complex systems behave in a myriad of ways – called modes – and can switch from one mode to another without warning
  • Deposit and withdrawal behaviour in isolation may appear normal
  • The behaviour of the account might signal a possible fraud or Ponzi scheme
  • The identity of the payer and payee might not arouse suspicion

Any given business (or account) is based on a series of processes that possess structure (i.e. the way information flows in the system). This structure is reflected in the account ledger. Typically we are looking for a loss of structure due to ‘de-correlation’ between the entries on the account ledger. If a person committing fraud changes the arithmetic, it will show up immediately as a sudden change in complexity.

Then by applying Quantitative Complexity Management;

  1. Essentially, one structure is (implicitly) mapped onto another. The creation of accounts and ledgers is done according to sets of rules, such as the International Accounting Standards (IAS).
  2. The structure is reflected in the so-called Complexity Map which shows the interdependencies between its entries.
  3. When the underlying model is manipulated with fraudulent intent, the topology of its Complexity Map changes accordingly.
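Step 2 above can be sketched as follows. As a hedged stand-in for a real Complexity Map, link any two ledger entries whose time series are strongly correlated; the 0.5 threshold and the toy ledger are illustrative assumptions, not QCM parameters.

```python
import numpy as np

def complexity_map(ledger: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Toy 'Complexity Map': a boolean adjacency matrix linking ledger
    entries (columns) whose time series are significantly inter-dependent."""
    corr = np.corrcoef(ledger, rowvar=False)
    adj = np.abs(corr) >= threshold
    np.fill_diagonal(adj, False)   # an entry is not linked to itself
    return adj

rng = np.random.default_rng(1)
deposits = rng.normal(100, 10, size=300)
withdrawals = deposits * 0.9 + rng.normal(0, 1, size=300)  # paid out of deposits
fees = rng.normal(5, 1, size=300)                          # unrelated entry
ledger = np.column_stack([deposits, withdrawals, fees])

adj = complexity_map(ledger)
print(adj[0, 1], adj[0, 2])  # True False: deposits<->withdrawals linked, fees not
```

The withdrawal column being funded directly by the deposit column shows up as an edge in the map; a legitimate book would not exhibit that dependency.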

A growing Ponzi scheme will present itself in the form of an upward drift of complexity, with a gradient proportional to the rate at which the scheme expands. For conventional AML systems, anomalies must be of sufficient magnitude to rise above the ‘noise floor’ of a given system or process. In QCM, disguising a Ponzi scheme with numerous small transactions will not be sufficient to hide it, because the emergence of its structure is inevitable. A nice feature of the QCM algorithm is that it is scale-independent: the magnitude of the transactions doesn’t affect complexity.
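The upward-drift signature can be tested mechanically: fit a least-squares slope to a complexity time series and flag a sustained positive gradient. The two series below are invented for illustration; only the drift-detection logic matters.

```python
import numpy as np

def drift_gradient(complexity_series) -> float:
    """Least-squares slope of a complexity time series; a sustained
    positive gradient is the signature of a growing scheme."""
    x = np.arange(len(complexity_series))
    slope, _ = np.polyfit(x, complexity_series, 1)
    return float(slope)

# Invented series: a healthy book fluctuates around a constant complexity;
# a growing Ponzi adds structure each period, so complexity drifts upward.
rng = np.random.default_rng(7)
healthy = 5.0 + 0.2 * rng.normal(size=100)
ponzi = 5.0 + 0.03 * np.arange(100) + 0.2 * rng.normal(size=100)

print(drift_gradient(healthy))  # close to 0
print(drift_gradient(ponzi))    # close to 0.03
```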


The payments system has to accept its role in Ponzi schemes: a parasitic form of fraudulent behaviour, often carried out under the cover of legal activity on home soil. Today’s AML systems remain woefully misguided and ill-equipped, and reliance on modelling and machine learning incurs fragility. However, when a Ponzi scheme is instituted within the bank’s universe it will alter its structure – in particular it will add structure, and hence increase complexity. As J. Marczyk puts it:


“Complexity is a function of structure (as well as entropy, but we will leave entropy out of the picture). When a given system undergoes some sort of mutation – in physics we could speak of a phase change, for example from liquid to solid – its structure changes. When this happens, complexity undergoes changes that may be sudden or gradual. When these changes are gradual they offer great crisis-anticipation signals. This has been observed in medicine, where hospitalized patients in intensive care, monitored via a series of clinical parameters and biomarkers, showed rapid complexity increases prior to an instability or to side effects of drugs, before conventional signals hinted at anything anomalous. In a totally different context, QCM has been shown to differentiate, with an extremely high degree of success, between counterfeit and genuine electronic components, such as chips.”

Complexity monitoring changes the focus of an AML system from client identification to anomaly identification, optimising payments analysis, automating the SAR process and removing human error. The prize: an end to fraudulent Ponzi schemes being obscured by the ever-faithful depositing account. We look forward to moving to proof of concept by working with progressive banks.

Reinventing Anti-Money Laundering with complexity – Part 1

Foreword by Jacek Marczyk
“Our society depends on a number of highly complex and highly interconnected networks that process and distribute energy, information, goods, etc. They form an immense system of systems that has one key characteristic – huge complexity. As this system of systems evolves and grows, its complexity also increases. And this is a problem. First of all, high complexity implies fragility. This is because highly complex systems can behave in a myriad of ways – called modes – and can often switch from one mode to another without any early warning. Many of these modes of functioning can be counterintuitive. In each mode a complex system offers different ‘concentrations of fragility’, points of weakness which may open the door to an attack. The more complex a system is, the more concentrations of fragility it possesses in each mode. Think of all the things that go wrong in a modern car with its sophisticated electronics.

More gadgets mean more trouble. Why? Because gadgets interact with each other, even though this is often unintended. More gadgets, more possible interactions. The tens of thousands of possible circumstances that can arise are impossible to test. The only way to proceed is on a trial-and-error basis and to let customers debug the product.

Anomaly detection has become a popular subject nowadays. The detection of malfunctions, or of anything suspicious such as hacker attacks or illicit operations of any sort, is obviously of great interest. There are various types of anomalies and malfunctions. Certain attacks may go undetected for prolonged periods of time, until the damage becomes visible. Some attacks are immediately obvious, as is the case with blackouts. Then there exist anomalies that are permanent but are never discovered. They make systems less efficient and less profitable but, very often due to high complexity, they remain masked. Monitoring of this universe can be performed periodically – even in real time if sufficient computational power is available – to track its complexity over time. The above logic can be applied to a universe of financial transactions to detect anomalies, in particular Ponzi schemes.”

Introduction: Anomaly-based Anti-Money Laundering Software (A-AMLS) with QCM

In finance, complexity can become inextricably linked to the rise of fraud. Complexity is the reciprocal of transparency. In a simple, transparent system it is difficult to disguise fraudulent anomalies; the system remains resilient. However, the number of financial products and their complexity are increasing; some are made deliberately complex, others escape their creators’ control by refusing to follow a Gaussian distribution here or a linear correlation there.

The presence of highly complex products increases the complexity of financial markets, offering new opportunities for fraudulent and illicit operations. High complexity is a great way to hide incompetence, inefficiency and fraud, and makes it difficult to identify responsibilities. Ponzi schemes (a form of fraud and money laundering) like Bernie Madoff’s were born out of investors’ desire to use complexity to escape market volatility. Likewise, the incompetence of the custodians who accept deposits, monitor assets and pay on maturity must be addressed.

Could the use of Complexity for Bank Deposit Anti-Money Laundering (AML) software stop the next Madoff fraud?

A complete rethink of anti-money-laundering (AML) software is needed, both in how it is structured and in how it is actioned. Banks can be negligent through a combination of human error and poor AML systems, as was the case with Madoff Securities and JP Morgan (JPM). Quantitative Complexity Management (QCM) offers a solution. Without it, banks can be unwitting accomplices to fraud; with QCM they can become safeguards in the system.

How can a Ponzi scheme be detected in a universe of money transfer transactions? Complexity monitoring of such universes may be of help. Fraudulent attacks may go undetected for prolonged periods of time, until the damage becomes visible.


In the world of possible applications of Paytech, stopping criminality and protecting customers must surely rank right up there – but is it possible? If judged against the full complexities of the Madoff case, then the answer might sadly be no. Confined to the payment system, possibly. Reviewing the Madoff case, the basics of AML systems and the AML failings helps to provide a framework for applying QCM.

The starting assumptions for any AML system:

  • Prevent the placement of assets from criminal, terrorist or sanctioned sources
  • Prevent the layering of assets, from prohibited sources, into deposit and investment
  • Prevent the integration of prohibited assets being paid back
  • AML is designed around client identification controls
  • AML relies on humans monitoring and escalating
  • AML identifies anomalous payments based on past customer behaviour

What is a Ponzi scheme?

A Ponzi scheme can be defined as a structure that attracts cash deposits on the prospect of future returns but actually pays out existing investors using the deposits from new investors. These schemes display high cash burn rates but can exist for decades as long as investors are content and redemptions are covered. Ponzi schemes have dogged the finance industry for as long as it has existed. They cast a long shadow, and perhaps none more so than Bernie Madoff’s.
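The definition above can be simulated in a few lines: the scheme survives only while new deposits cover redemptions. All figures are invented for illustration.

```python
# Minimal simulation of a Ponzi scheme's cash burn: existing investors are
# paid out of new deposits, so the scheme fails the month inflows no longer
# cover redemption requests.

def months_until_collapse(balance, inflows, redemptions):
    """Return the month index at which the scheme can no longer pay,
    or None if it survives the whole period."""
    for month, (inflow, redemption) in enumerate(zip(inflows, redemptions)):
        balance += inflow - redemption   # deposits in, 'returns' paid out
        if balance < 0:
            return month
    return None

# Steady inflows cover redemptions for years; a run of withdrawals
# (as in autumn 2008) empties the account within months.
calm = months_until_collapse(100, [50] * 120, [45] * 120)
run = months_until_collapse(100, [50] * 12, [120] * 12)
print(calm, run)  # None 1
```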

The Psychology of Ethics in the Finance and Investment Industry, CFA Institute Research Foundation Publications (June 2007).

Why did it happen? I could cite Orwell, Nietzsche or even Confucius, or any other commentator on the human condition. It is an ethical question but also a system-based problem. The CFA Institute considers the ethical aspect here.

It is more useful to consider how. Eradicating the causes of Ponzi schemes is a Rubicon that regulators have failed to cross. However, for all the complexity of a Ponzi scheme, they are often perpetrated through an unassuming bank account: payments in, payments out. Yet over time, and across thousands of accounts, this itself becomes a complex network.

About Madoff

For those who don’t recall, this case involved the largest known Ponzi scheme in history: $65 billion defrauded out of $177 billion, all deposited into a reputable bank. The fact that the collapse of the Ponzi scheme was a consequence of the Great Financial Crisis only made the losses all the more painful. Something in the American psyche broke.


On March 12, 2009, Madoff pleaded guilty to 11 federal crimes and admitted to operating the largest private Ponzi scheme in history. In his guilty plea, Madoff admitted that he hadn’t actually traded since the early 1990s, and that all of his returns since then had been fabricated. The New York Post reported that Madoff “worked the so-called ‘Jewish circuit’ of well-heeled Jews he met at country clubs on Long Island and in Palm Beach”. Over the years many accusations and investigations were initiated against Madoff, both externally and internally at the depositing bank, but none led to action.

In 2000 Harry Markopolos alerted the SEC. His analysis concluded almost immediately that Madoff’s numbers didn’t add up. After four hours of trying and failing to replicate Madoff’s returns, Markopolos concluded Madoff was a fraud. He told the SEC that, based on his analysis of Madoff’s returns, it was mathematically impossible for Madoff to deliver them using his claimed strategies: either Madoff was front-running his order flow, or his wealth management business was a massive Ponzi scheme. The culmination of Markopolos’ analysis, in his third submission, was a detailed 17-page memo entitled ‘The World’s Largest Hedge Fund is a Fraud’, which specified 30 numbered red flags based on just over 14 years of Madoff’s trades.

He approached The Wall Street Journal in 2005, but WSJ editors decided not to pursue the story.

Watch Harry Markopolos CFA testimony on Madoff here:

Alternative data, investment strategies and the principle of incompatibility

Alternative data – a trendy subject nowadays – is non-traditional data that can be used in the investment process. An increasing percentage of hedge fund managers plan to use this kind of data and new analytics in their investment processes. Examples of alternative data are:
  • Social media
  • Sentiment
  • Web crawls
  • Satellite & weather
  • Consumer credit
  • Internet of Things
  • Mobile App usage
  • Advertising
  • Store locations
  • Employment

The amount of such data may be huge. The logic behind adopting alternative data is: “Large amounts of data lead to more available information for better analysis.”

This statement is not necessarily always true. Let’s see why. Before you start to solve a (numerical) problem you should check its conditioning. An ill-conditioned problem will produce very fragile and unreliable results – no matter how elegant and sophisticated a solution you come up with, it may be irrelevant or simply wrong.

Simple and basic problem

Consider a simple and basic problem, a linear system of equations: y = A x + b. If A is ill-conditioned, the solution will be very sensitive to entries in both b and y and errors therein will be multiplied by the so-called condition number of A, i.e. k(A). That’s as far as simple linear algebra goes. However, most problems in life cannot be tackled via a linear matrix equation (or any other type of equations for that matter). This does not mean, though, that they cannot be ill-conditioned, quite the contrary.
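The error amplification by k(A) is easy to demonstrate with numpy; the matrices below are invented for illustration (solving y = A x + b for x, i.e. A x = y - b).

```python
import numpy as np

A_good = np.array([[2.0, 0.0], [0.0, 1.0]])
A_bad = np.array([[1.0, 1.0], [1.0, 1.0001]])  # nearly singular

print(np.linalg.cond(A_good))  # 2.0
print(np.linalg.cond(A_bad))   # ~4e4

# Perturb b by 1e-4 and watch the error in the solution explode.
y = np.array([1.0, 1.0])
b = np.zeros(2)
x = np.linalg.solve(A_bad, y - b)
x_perturbed = np.linalg.solve(A_bad, y - b + np.array([0.0, 1e-4]))
print(np.linalg.norm(x_perturbed - x))  # ~1.4: a 1e-4 error amplified ~14,000x
```

With A_good the same perturbation would move the solution by about 1e-4; with A_bad the answer is essentially meaningless.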

The numerical conditioning of a problem should always be computed before one attempts its solution. How often is this done? Very rarely. Once you’ve determined that a problem is well-conditioned, there is the issue of determining whether it admits one solution, multiple solutions or none. Those who practice math on a daily basis know this well. Nothing new under the Sun. However, if you collect huge amounts of data and you don’t check its numerical conditioning before you start to work on it, you may be playing an extravagant video game.

There is one fundamental issue, in our view, which makes huge problems and huge data sets difficult to solve – high complexity. Being close to critical complexity – each system has such a threshold – means that the problem (or system) is very ill-conditioned and dominated by uncertainty (i.e. is chaotic). Imagine the linear system of equations y = A x + b in which the entries of A are not crisp values but fuzzy. In other words, suppose that a particular entry a_ij assumes values from a certain range and that the “exact” value is unknown. This changes the situation dramatically, as the system can lead to a huge number of solutions.

Chaotic soup of numbers

Suppose, now, that a huge set of alternative data has been collected. How can one determine if this data is of value, i.e. if it contains structure and useful rules, or if it is just a chaotic soup of numbers? This can be done easily by measuring the data set’s complexity and corresponding critical complexity. Their ratio is a good proxy of numerical conditioning (i.e. k(A)). Very simple examples of what we are talking about are shown in the figure below.

The case on the left-hand side corresponds to a low-complexity, high-correlation situation from which one may extract a crisp and useful rule (i.e. ‘if X increases then Y increases’). At the other extreme the data is uncorrelated and no rule may be extracted. So, the complexity/critical complexity ratio for a data set is a sort of data-set rating – a low value points to data which can deliver useful information, while values close to 1 reveal a situation dominated by chaos and noise.
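As a toy stand-in for such a rating (the real rating uses a complexity measure, not plain linear correlation; the data below is synthetic), one can contrast the two extremes in code:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 500)

structured_y = 2 * x + rng.normal(0, 0.1, 500)  # left-hand case: a clear rule
noise_y = rng.uniform(0, 1, 500)                # right-hand case: chaotic soup

for label, y in (("structured", structured_y), ("noise", noise_y)):
    r = abs(np.corrcoef(x, y)[0, 1])
    print(label, round(r, 2))  # near 1: a rule exists; near 0: no rule
```

The first data set yields a usable rule; the second, no matter how many samples you add, yields nothing.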

One last point. Data and the corresponding analyses must be relevant, not accurate. When high complexity kicks in there is no such thing as accuracy or precision. The Principle of Incompatibility, coined by L. Zadeh, states that ‘high complexity is incompatible with high precision’. In other words, when complexity is high, ‘precise statements lose relevance’ and ‘relevant statements lose precision’.


Precise statement that is irrelevant: the probability of default of a given corporation in the next 3 years is 0.025%.

Relevant statement that is not precise: there is a high probability that it may rain tomorrow.

Alternative data, or Big Data, can be very complex. How complex? Well, you need to measure it, but suppose that indeed its complexity is high. In such circumstances don’t delude yourself – the information you extract from it will not be precise. Adding more data is not synonymous with adding more information.

The bottom line: every set of Alternative Data (or Big Data) should have a complexity-based conditioning, or rating, attached to it. If this rating is poor – i.e. the complexity/critical complexity ratio is close to 1 – you’ll never extract useful information from it, no matter what method you use. Handle with care.

How to ‘fix’ standard deviations

Standard deviations are a popular and often useful measure of dispersion. To be sure, a standard deviation is merely the root-mean-square deviation from the mean. It also doesn’t take into account the shape of the probability distribution function (this is done better using, for example, entropy, which is a more versatile measure of dispersion).
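A quick synthetic check of the point about distribution shape (illustrative data, Python/NumPy): two completely different distributions can share the same standard deviation, which is precisely the kind of difference entropy would pick up.

```python
import numpy as np

rng = np.random.default_rng(0)

bell = rng.normal(0.0, 1.0, 100_000)     # bell-shaped samples
coin = rng.choice([-1.0, 1.0], 100_000)  # two-point samples

# Both standard deviations are (almost exactly) 1.0, yet the shapes of the
# two distributions - and hence their entropies - differ completely.
print(round(bell.std(), 2), round(coin.std(), 2))
```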

Standard deviations, however, may be ‘adjusted’ to take into account an interesting aspect of data, namely complexity. Let’s see an example. Say you have a portfolio of 28 stocks, all of which are independent (i.e. uncorrelated). In such a case the complexity map of the portfolio is as the one below.

One computes the standard deviation of each stock and may then use it to measure the volatility of the portfolio or other measures of risk. Suppose now that some of the stocks are indeed correlated. Say that the complexity map is now the one below.
Stocks 5 and 7, for example, are correlated with numerous other stocks, while 3, 6 and 25 are uncorrelated. This is reflected in the Portfolio Complexity Profile (or Portfolio Complexity Spectrum) which ranks the complexity footprint of each stock in the portfolio. This is illustrated below.
Stock 7 has a footprint of just over 17% while stock 5 is responsible for nearly 15% of the complexity of the portfolio.

Clearly, just like in the previous case, one can calculate the standard deviations of all stocks one by one. However, in the first case all stocks were uncorrelated; here some of them are. These two cases are obviously different, in particular from a structural point of view. The question now is this: why not use the information in the Complexity Profile to ‘adjust’ standard deviations by adding a correction originating from complexity? Clearly, a stock that is heavily correlated to other stocks in a portfolio could be more ‘dangerous’ than an uncorrelated one. Evidently, it is the job of covariance to express this:

Covariance(i,j) = Correlation(i,j) x STD(i) x STD(j)
But why not take this into account also at standard deviation level? One simple way to accomplish this is the following:
Adjusted STD = (1 + Complexity contribution) x STD
Basically, stocks that increase portfolio complexity see their standard deviations corrected (increased) by a complexity-based factor. The (ranked) result is illustrated below.
The bar chart below shows the complexity-induced corrections of standard deviations:
For example, the standard deviation of the biggest complexity contributor – stock 7 – which is 3.81, is incremented by 17.1% (its complexity footprint) to yield a value of 4.46. The norm of the original covariance matrix is 58.21, while the ‘corrected’ covariance matrix has a norm of 68.15.
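The adjustment can be reproduced in a few lines, using the figures quoted above for stock 7 (STD 3.81, complexity footprint 17.1%):

```python
def adjusted_std(std, complexity_contribution):
    """Adjusted STD = (1 + complexity contribution) x STD."""
    return (1.0 + complexity_contribution) * std

print(round(adjusted_std(3.81, 0.171), 2))  # 4.46, as quoted above
```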

Portfolio complexity – a factor typically neglected when analyzing or designing a portfolio (a covariance matrix is a poor substitute) – ‘increases’ standard deviations, illustrating eloquently the concept of complexity-induced risk.

Classical statistics may produce overly optimistic results if complexity is neglected. Every system has some degree of complexity, which is invisible to conventional analytics. In reality, there is often more risk than one may think.

Car electronics: how much more complexity can we handle?

As we know, excessive complexity is a formidable source of fragility. If you want to make something fragile, make it very complex. Problems are guaranteed. This is because a highly complex system may behave according to many ‘modes’ (in non-linear mechanics these are called ‘attractors’).

High complexity means that under certain circumstances a system may jump from one mode to another without any warning. Sometimes one such mode of functioning is called ‘fault’. A most unpleasant property, especially because high complexity can also mask the cause, or the multiple causes. In fact, when a modern car has a problem with its electronics, parts of the system which may be the cause are simply replaced and nobody fixes anything. Sometimes, the cause of the problem is unknown and is never discovered.

Highly complex systems

In highly complex systems, malfunctioning or even bad design may remain invisible for a long time. In highly complex systems the crucial variables are often discovered by accident. Highly complex systems cannot be designed without taking complexity into account. This sounds obvious, but in today’s engineering design complexity is not considered as an attribute of a system, as a variable to account for when designing the system’s architecture. An example is the electronics of a modern car.

An article, of which we quote a passage, explains the situation, showing some very interesting figures:

“Increasingly complex gadgets in cars may be causing a rise in expensive faults and breakdowns,” figures suggest.

Warranty Direct, which analysed data from 50,000 policies for cars aged three years or older over a five year period, found that the number of electrical faults rose from about 5,300 in 2008 to 11,500 in 2013.

The figures suggest that increasingly complex electronic systems are also costing a growing amount to repair, with the average cost for fixing a fault rising from £221 to £291 during the same period.

In premium cars, the costs were even higher, with the average electrical repair costing £670 in a Bentley and £757 in a Porsche. In contrast, the average repair on a Suzuki cost just £244.

Although standard mechanical components such as relays and alternators are still the most likely items to fail, the figures show that more modern technology such as parking sensors are now also among the most common causes for complaint.

Specialised equipment is often needed to diagnose and fix electrical problems, while in some of the newest models only franchised dealers are able to access systems for repair, adding to the cost of repairs.

At the time the above article was written, Subaru was the most reliable manufacturer overall, with just one in seven of its cars developing a problem each year, compared with more than one in three Renaults. The two charts shown below, taken from the above article, are very eloquent.

Excessive complexity in modern vehicles

The issue of complexity-induced problems in modern cars may be approached using Quantitative Complexity Management (QCM) technology. First of all, the problem must be diagnosed. The simplest way to do it is to embed our QCM engine OntoNet in the ‘CPU’ of the car, and analyze in real-time the data which the various modules (sub-systems) exchange between one another. Such data may be taken directly from the Controller Area Network (CAN) Bus.

An example of how electronics sub-systems interact in a modern car is illustrated below, where the so-called Complexity Map is shown. The size of the square nodes is proportional to the contribution of complexity – the large nodes add more complexity than the small ones. NB the topology of the map is not constant; it changes over time, as different systems are engaged or switched off.

The image also shows two interesting figures:

  • A global complexity measure – the current value of complexity (in this case 9.51 cbits)
  • Critical complexity (in this case 11.11 cbits) – the maximum complexity a given system can sustain

Critical complexity is the maximum amount of complexity a given system can handle before it starts to lose functionality and its behavior becomes uncontrollable. If a system functions close to its critical complexity for a long time, faults are more likely. Such a system should be redesigned. In such cases, complexity should be one of the design attributes to keep under strict observation.
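Using the two figures above (9.51 cbits of complexity against a critical complexity of 11.11 cbits), even a trivial ratio already works as an early-warning indicator:

```python
def complexity_usage(complexity, critical_complexity):
    """Fraction of the critical complexity the system currently 'uses up'."""
    return complexity / critical_complexity

# Figures from the car-electronics example: 9.51 of 11.11 cbits.
usage = complexity_usage(9.51, 11.11)
print(round(usage, 2))  # 0.86: the system operates close to its limit
```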

Each of the nodes in the above map is a sub-system, which means that it is composed of a certain number of parts, which are represented symbolically as variables (channels) in the Complexity Map illustrated below. For each system it is possible to measure its complexity and critical complexity. All it takes is raw data from the CAN Bus.

The interesting thing that becomes immediately apparent is the huge number of interactions which exist between the various sub-systems. Often, such interactions are defined by design; sometimes they exist because of electromagnetic interference, corrosion, etc. The simple fact is this: a few tens of components can potentially develop thousands of interactions. Does anyone ever check them all?
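The combinatorics behind that claim is easy to check: n components allow n(n − 1)/2 potential pairwise interactions.

```python
def potential_interactions(n):
    # Number of distinct pairs among n components: n * (n - 1) / 2.
    return n * (n - 1) // 2

print(potential_interactions(50))   # 1225
print(potential_interactions(100))  # 4950
```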

The instantaneous contribution of each sub-system to the overall complexity of the entire electronics system is represented by the following chart, called the Complexity Profile:

The sub-systems at the top of the chart are those that, in a given period of time, are the main contributors to overall complexity. If in that time frame there is a problem, this is where one should look first in order to identify the cause. Think of the Complexity Profile as a graphic equalizer on your stereo. It changes dynamically over time and provides a precious breakdown of the problem into components.

The cited article closes with the following paragraphs:

“David Gerrans, Warranty Direct managing director, said: “As automotive technology continues to advance, cars get more and more complex. Nowhere is that more so than in the field of computer technology and other electronics.

“But while these advances can undoubtedly improve the performance and safety of cars, they also have a knock-on effect on how often they fail and how much it costs to repair them.”

The issue of excessive complexity in modern cars (and aircraft, or any large IT infrastructure) can be diagnosed easily. The data which travels along the CAN Bus contains all it takes to run a full systemic diagnosis of the system in near real-time. A complexity scan of a system of systems is the best way to identify:

  1. most common sources of complexity concentration (i.e. which sub-systems are the most frequent contributors to overall complexity)
  2. most common sources of fragility
  3. how many more gadgets (functions) can be introduced into an existing platform before things really get tough – this limit is defined by critical complexity which can be measured

Once these have been identified – this may be performed over a period of, say, 3-6 months – engineers will have useful information on how to design the future architectures of car electronics systems and how to alleviate the problems with the current ones. It will not be long before each modern car, aircraft or spacecraft will have on board something like this:

In order to avoid drowning in complexity – just think of how complex the Internet of Things is about to get – complexity must be monitored constantly. Complexity monitoring and (quantitative) management must become part of our lifestyles.

What is Critical Complexity?

Knowledge is an organized and dynamic set of interdependent rules!

An example of a rule:

“If UNEMPLOYMENT increases

This is an example of a fuzzy rule – no numbers, just a global trend. Rules can be more or less fuzzy (or crisp) depending on how many experiments (data samples) they are based on.

What makes information fuzzy and less precise is noise and, in general, uncertainty or disorder. A great way to illustrate the concept is by analyzing, for example, a simple phrase, such as this:

This is an example of a simple phrase which is used
to illustrate the concept of critical complexity.

Let’s introduce a few spelling mistakes:

Thos is a n exrmple of a simpcle phrqse whih I s us ed
to illuxtrate the concyept of critizal com plexiuy.

Let us introduce more errors – with some imagination the phrase is still readable (especially if you happen to know the original phrase):

Tais xs a n exreple zf a sempcle phrqee waih I s vs ed
eo illuxtkate the concyevt of crstrzal ctm plexihuy.

And even more:

Taiq xs a n exrepye zf d semicle pcrqee raih I s vs ed
eo ilnuxtkare the cmncyevt tf crstrzaf ctm plsxihuy.

This last phrase is unreadable. All of the original information has been lost. We could say that the phrase before this last one is critically complex – adding a small dose of uncertainty (spelling mistakes) would destroy its structure. Systems which are on the verge of losing their structure simply because one sprinkles a little bit of noise or uncertainty on top, are fragile – they collapse with little or no early warning.
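The experiment above is easy to reproduce; the sketch below (an illustrative reconstruction, not the original generator) sprinkles ‘spelling mistakes’ on a phrase with increasing probability:

```python
import random
import string

random.seed(42)

def corrupt(text, p):
    """Replace each letter by a random lowercase letter with probability p."""
    return "".join(
        random.choice(string.ascii_lowercase)
        if ch.isalpha() and random.random() < p else ch
        for ch in text
    )

phrase = "This is an example of a simple phrase"
for p in (0.1, 0.3, 0.6):
    print(f"p={p}: {corrupt(phrase, p)}")
```

Somewhere between p = 0.3 and p = 0.6 the phrase stops being readable: that is its critical complexity.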

This is precisely why in the case of very large or critical systems or infrastructures, such as multi-national corporations, markets, systems of banks or telecommunication and traffic networks, it is paramount to know how complex they are and how close to their own critical complexity they happen to function.

If you do know how complex your business is, and how far from criticality it finds itself functioning, you have a great early-warning system.

On the extraordinary importance of complexity

Of all physical quantities, energy is probably the most important. Energy expresses the capacity of a body or a system to perform work. Nature works by using energy to transform matter. This is done via processes (physical, chemical, etc.).

However, in order to realize these processes it is necessary to have information. Energy on its own is not sufficient. One must know what to do with it and how to do it. This is where information comes into the picture. Information is stored and delivered in a variety of ways. The DNA, for example, encodes biological information.

Information is measured in bits. Shannon’s Information Theory states that entropy is a measure of information: H(X) = −Σᵢ p(xᵢ) log₂ p(xᵢ).

Entropy, however, has many facets. While it measures the amount of information necessary in order to describe a system, it also quantifies the amount of disorder contained therein. The above equation is of course for a single variable (dimension). When more dimensions are involved, structure emerges. This is because of correlations, or interdependencies, that form.
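For a single discrete variable, the Shannon entropy referred to above can be computed directly:

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """H = -sum_i p(x_i) * log2 p(x_i), in bits (single-variable case)."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("aabbcc"))  # log2(3), about 1.585 bits
print(shannon_entropy("aaaaaa"))  # zero bits: total order, zero disorder
```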

Examples of interdependencies are depicted below:

By the way, conventional linear correlations cannot be used in situations like the ones above. In fact, in order to deal with complex and intricate structure in data we have devised a generalized correlation scheme based on quantum physics and neurology. But that is another story.

An example of structure (in 29 dimensions) is shown below. It has the form of a map. The black dots represent the said interdependencies. In general, the map’s structure changes in time.

In Nature, we witness the interplay of two opposed forces: the incessant urge to create structure (using entropy, i.e. information) and the persistent compulsion to destroy it (turning it into entropy, i.e. disorder).

This is represented by the following equation, which is also a formal definition of complexity: C = f(S; E).

In the above equation S stands for structure while E represents entropy. An example of the spontaneous emergence of structure is that of protein folding from a set of amino acids:

Other examples of emergence of structure are biospheres, societies, or galaxies.

An example of structure destruction (structure-to-entropy transformation over time) is shown below:

This brings us to the key point of this blog.

Complexity not only captures and quantifies the intensity of the dynamic interaction between Structure and Entropy, it also measures the amount of information that is the result of structure. In fact, entropy – the ‘E’ in the complexity equation – is already a measure of information. However, the ‘S’ holds additional information which is ‘contained’ in the structure of the interdependency map (known also as complexity map). In other words, S encodes additional information to that provided by the Shannon equation.

In the example in 29 dimensions shown above the information breakdown is as follows:

Shannon’s information = 60.25 bits
Information due to structure = 284.80 bits
Total information = 345.05 bits
In this example, structure furnishes nearly five times more information than the sum of the information content of each dimension. For higher dimensions the ratio can be significantly higher.
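The breakdown can be verified with a line of arithmetic (figures as reported above):

```python
shannon_bits = 60.25     # sum of the single-dimension information contents
structure_bits = 284.80  # additional information encoded in the structure

print(round(shannon_bits + structure_bits, 2))  # 345.05 bits in total
print(round(structure_bits / shannon_bits, 1))  # ~4.7x: "nearly five times"
```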

Complexity is not just a measure of how intricate, or sophisticated, something is. It has a deeper significance in that it defines the topology of information flow and, therefore, of the processes which make Nature work.

Payment card industry in Vietnam – A systemic risks analysis

From a recent article by the Vietnam Chamber of Commerce and Industry:

Cash remains king in Vietnam but credit card issuers are predicting an imminent boom in the card market with the number of Vietnamese cardholders potentially growing by 10 times the current number of nearly one million.

Nguyen Thu Ha, chairperson of the Vietnam Card Association under the Vietnam Banking Association, said from both the macro-economic and banking perspective, the domestic card market is considered very strong given the rising incomes among the country’s 82 million people, rapid economic growth and improving legal system.

Increasing tourist arrivals and the influx of money remitted home by overseas Vietnamese would also facilitate credit card growth, Ha said at a conference touting the potential for electronic payments in Hanoi last week.

So, in theory things look great for the Vietnamese payment card industry. Let’s see if this is confirmed by a systemic Resistance to Shocks analysis. In other words, instead of analyzing, for example, a single bank issuing credit cards, we will analyze a total of forty banks as a system. The analysis has been performed using publicly available data. The data in question is the following (number of parameters is 35, all data is relative to 2016):
Total number of cards
Number of Domestic Debit cards
Number of International Debit cards
Number of Domestic Credit cards
Number of International Credit cards
Number of Domestic Prepaid cards
Number of International Prepaid cards
Number of Other Cards
Total Cards Revenue
Domestic Debit Card Revenue
International Debit Card Revenue
Domestic Credit Card Revenue
International Credit Card Revenue
Domestic Prepaid cards Revenue
International Prepaid cards Revenue
Other Cards Revenue
Total Card Payment Revenue
International Card payment revenue at card accepting units
International Card payment revenue at ATMs
Domestic Card payment revenue at card accepting units
Turnover of Cash Advances by Domestic Cards at POS
Domestic Card payment revenue at ATMs
Number of ATM until 31/12/2015
Number of ATM until 31/12/2016
Number of POS until 31/12/2015
Number of POS until 31/12/2016
Cash Withdrawal
Cash Transfer
Revenue spending at the Card Accepting Unit
Contract Payment Revenue
Other Domestic Cards Revenue
International Card Payment Revenue at Card Accepting Units
Online Payment of International Cards at Card Accepting Units
Domestic Card Payment Revenue at Card Accepting Units
Online Payment of Domestic Cards at Card Accepting Units

The corresponding Complexity Map is illustrated below:

The analysis reveals a very high Resistance to Shocks (RtS), namely 95.6%, which corresponds to a five-star rating. It is interesting to note that this situation hinges on a handful of parameters: the number of ATMs, cash withdrawals and the number of international debit cards. Basically, the first four parameters are responsible for nearly 38% of the overall state of health of the system. Any policies aiming at improving or strengthening the payment card industry in Vietnam should target these parameters first.

The complete ranking of parameters in terms of how they impact the situation is reported in the chart below (values are in %).

It is also interesting to analyze the system of forty card issuing banks. The Complexity Map, based on the above data per bank, is illustrated below:

Again, the RtS rating is very high, a staggering 99% with a five-star rating. One must remember, however, that this is an analysis based on payment cards data alone. The curious thing is that the degree of interdependency of this system of forty banks is 87%. This is extremely high. If one looks at the map one realizes that it is very dense. What this comes down to is quite evident – every bank is correlated to almost every other bank. This is not good if the system is exposed to a shock as its effects would propagate very quickly throughout the entire network.
In terms of systemic risk, the banks which are situated at the top of the bar chart shown below are the ones that are most exposed (values in %).

The banks which are exposed the most when it comes to systemic risks are Eximbank, Maritime bank, Ocean bank, VPBANK and CBBank. In case of a shock, these banks will be most vulnerable. In fact, note that they are also the hubs of the Complexity Map, i.e. they have the largest number of inter-dependencies with the other banks.
Based on the 2016 revenue and systemic exposure of each bank, the footprint (weight) of each bank on the system is indicated below (VPBANK has a value of 1 as it has the highest revenue and is taken as reference).

What is evident is that VPBANK is critical to the system (i.e. payment card industry). It has the highest revenue, which, combined with a high systemic exposure (number 3 in the previous ranking) turns it into a hub. Any actions, aiming at the improvement, or growth of this particular sector, should be targeted at the banks at the top of this chart.

Gravity, Time and Complexity

In an article entitled “Is gravity the force driving time forwards?”, a theory is proposed in order to explain why time evolves in only one direction. The theory attempts to complement the Second Law of Thermodynamics, which has been instrumental in establishing the concept of the arrow of time based on entropy.

The article suggests that while entropy explains why a shattered cup won’t spontaneously recompose itself into its original form, it doesn’t explain why the cup exists in the first place. This is due, according to the authors, to the clumping power of gravity. It is then claimed that:
‘Overall, it’s no surprise to learn that the Universe gets more complex with time.’
This statement is not entirely correct. In our earlier article “The Fourth Law” we show how in the presence of non-decreasing entropy, complexity does indeed grow but then it peaks at a certain time. This is illustrated in the plot below.
It also suggests that, once entropy has eroded all structure in the Universe – when complexity reaches zero – there may be a Big Crunch.
The evolution of the post-Big-Bang Universe is attested by the fact that we observe increasingly complex structures: planets, stars, pulsars, quasars, galaxies or clusters of galaxies. However, we don’t know where we are on the above complexity curve today. Time will show!
Complexity is the bridge which connects structure (S) and entropy (E). Complexity brings together the two strongest and antagonistic forces of Nature – the urge to create structure and the compulsion to destroy it. C=f(S; E).

Jacek Marczyk

Author of nine books on uncertainty and Complexity Management, Jacek developed the Quantitative Complexity Theory (QCT), a new complexity-based theory of risk and rating, in 2003. In 2005 he founded Ontonix, a company delivering complexity-based early-warning solutions with particular emphasis on systemic aspects and turbulent economic regimes. Read more publications by Jacek

Complexity: A next generation global cyber weapon

The present blog introduces a new, exotic application of the QCM technology – the use of complexity as a systemic offensive tool. The goal, therefore, is not to prevent crises or systemic collapses – which is the mission of Ontonix – but to cause them.
Complexity is a systemic characteristic of networks and processes. Since 2005 complexity can actually be measured. It is measured in cbits (complexity bits) and quantifies the amount of structured information ‘contained’ within a system. Every system (network, process) possesses at a given time a certain amount of complexity as well as the so-called critical complexity. In proximity of its critical complexity the dynamics of every system tends to be dominated by uncertainty, becoming chaotic and uncontrollable. This reduces its structural stability, rendering it less resilient hence vulnerable. Systemic collapses happen in the presence of high fragility and high density of interconnections, i.e. in the proximity of critical complexity.
Well-managed systems and processes function at a certain distance from their respective critical complexities. However, this mandates that one be able to measure both complexity as well as critical complexity. This is the business of Ontonix. Managing super huge systems and networks without the explicit knowledge and monitoring of complexity and critical complexity is risky, to say the least.

The objective of ‘complexity as a weapon’ is to reduce/neutralize the overall resilience of the enemy by deliberately introducing harmful targeted and structured information (complexity) into adversarial networks so as to induce fragilities as well as structural instabilities leading, potentially, to systemic/catastrophic collapse.

The goal is to ‘inject’ complexity into the adversary’s computers and networks in a surgical manner, damaging or debilitating systems such as PLC, DCS and SCADA, in particular the hubs of those systems, which can quickly propagate the effects of an attack on a large scale. The aim is to increase network/process complexity to levels in the vicinity of critical complexity, so as to induce fragility, vulnerability and cascading failures. In essence, we’re looking at a targeted alteration of specific sensitive network functions.

Inducing critical complexity levels in strategic networks can offer an effective preemptive measure which can soften the enemy’s critical infrastructures/networks prior to a more conventional attack (cyber or not).

Complexity-based aggression, when implemented on a large scale (i.e. when targeted at large networks or interconnected systems of networks), can offer a ‘subtle’ low-intensity and low-visibility intervention by virtue of its highly distributed nature. In other words, instead of a highly concentrated attack, a more diluted action may prove difficult to trace and counter and, at the same time, lead to devastating systemic consequences.

The technical details of ‘complexity as a weapon’ will not be explained in this blog for obvious reasons. However, the rationale is based, in part, on certain observations one can make when studying very large-scale highly complex systems, such as the following:
  • The Functional Indeterminacy Theorem (F.I.T.): In complex systems, malfunction and even total non-function may not be detectable for long periods, if ever.
  • The Fundamental Failure-Mode Theorem (F.F.T.): Complex systems usually operate in failure mode.
  • A complex system can fail in a very large number of ways. Higher complexity means a system possesses more failure modes.
  • The larger the system, the greater the probability of unexpected failure.
  • Our Quantitative Complexity Theory has verified the above statements on an empirical and numerical basis (science, not opinions, has always been our motto).
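As a purely combinatorial illustration of the point that higher complexity means more failure modes (a toy count, not part of QCT itself): if any nonempty subset of n components may fail together, the number of conceivable failure modes grows as 2^n − 1.

```python
def failure_mode_upper_bound(n_components):
    # Each nonempty subset of components is a conceivable joint failure,
    # so the combinatorial upper bound on failure modes is 2^n - 1.
    return 2 ** n_components - 1

print(failure_mode_upper_bound(10))  # 1023
print(failure_mode_upper_bound(30))  # 1073741823: over a billion
```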

When it comes to complex systems – by the way, before you say something really is complex you should actually measure its complexity – failure isn’t always something obvious and may even be difficult to design. In fact, there are many ways, modes, in which such systems can fail (or be made to fail). In reality, failure is a combination (superposition) of various failure modes. Some of these modes can be quite likely, some can require high energy expenditure in order to trigger them, some can be triggered with little effort but may require an unlikely set of circumstances.

This means that it may be possible to provoke the collapse of large networks/systems by identifying first what their failure modes are and, in each mode, pinpointing the key variables (nodes) that can cause a cascading failure. Once these nodes have been identified, that’s where the attack should be concentrated. The way this is accomplished is not intuitive. It is not sufficient to ‘invert’ the conventional QCM-based process of system ‘robustification’ in order to arrive at the complexity-as-a-weapon logic which induces systemic fragility. What is certainly needed, though, is plenty of supercomputer fire power.

Who would be the target of a large-scale systemic ‘complexity attack’? Rogue states that are threatening global peace and support terrorism.
