Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.
The idea is to make machines (software and hardware) function in a manner similar to humans, automating the human capacity to learn, make decisions and accumulate experience.
AI algorithms fall into many classes. Wikipedia reports these:
- Search and optimization
- Probabilistic methods for uncertain reasoning
- Classifiers and statistical learning methods
- Neural networks
- Deep feed-forward neural networks
- Deep recurrent neural networks
- Control theory
Most of the above techniques, not to say all of them, are pretty old; anyone with some grey hair saw this stuff 20-30 years ago. We can agree that these techniques are quite successful, and also that AI is still in its infancy. But that is not the point of this blog.
The question we wish to ask is this: what, at this point, can we do to make AI less artificial and more intelligent, almost indistinguishable from humans? What can we do to take AI to another level, to provoke a quantum leap?
Let us first examine what the word “intelligence” means by looking at its etymology. It comes from the Latin “intelligentia”: understanding, knowledge, power of discerning; from an assimilated form of inter (“between”) + legere (“choose, pick out”). Let us focus on the “choose, pick out” part in conjunction with the “inter”, i.e. between. Humans make choices, select options, weigh scenarios and discriminate between different solutions hundreds if not thousands of times a day. Sometimes this is done using tools; in other cases it is based on gut feeling, intuition or experience. It is this capacity to select an option or a strategy that makes humans unique. Life, with all its complexities and nuances, offers almost infinite ways and means of setting goals and then reaching them. No two people will do things in exactly the same manner.
Now we don’t want to get too philosophical here. The idea is to simply state that a generic problem solving process goes more or less like this:
- Problem statement (definition)
- Verification if the problem actually has a solution
- Selection of solution method (there may be many)
- Verification of result
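The steps above can be sketched as a simple loop. This is only an illustration of the process, not an actual algorithm or API; all function names are invented:

```python
def solve(problem, methods, verify):
    """Generic problem-solving loop: state the problem, check that a
    solution method exists, try a method, verify the result."""
    if not methods:
        raise ValueError("no known solution method")  # the problem may have no solution
    for method in methods:            # selection of solution method (there may be many)
        result = method(problem)
        if verify(problem, result):   # verification of result
            return result
    return None                       # no method produced an acceptable result

# Usage: find the nonnegative square root of 16 by trying two candidate methods
answer = solve(16,
               methods=[lambda p: -(p ** 0.5), lambda p: p ** 0.5],
               verify=lambda p, r: r >= 0 and abs(r * r - p) < 1e-9)
print(answer)  # 4.0
```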
This is, of course, a very rough description. A generic example is illustrated below. Suppose that one has to cross the network (or some domain) from left to right.
Suppose that we identify three possibilities represented by the three paths shown in the figure. Suppose, just for the sake of discussion, that each of these paths entails very similar energy expenditure, time, cost, risks, etc. Which path would you choose? All things being equal (or not necessarily all) an experienced and wise individual (or a good engineer!) would probably select the least complex alternative. Humans instinctively imagine multiple scenarios and assess their complexity trying to stay away from the ones that will potentially make life complex in the next few minutes, days, or years. This logic applies to running a family, a corporation or a battle scenario. High complexity leads to fragility.
The key, therefore, is to be able to measure complexity. Since 2005, thanks to Ontonix, this has been possible. There is now an Italian standard, UNI 11613, which shows corporations how to perform a “Business Complexity Assessment”, and work is in progress to deliver a similar ISO standard, ISO 22375 (“Guidelines for Business Complexity Analysis”).
The bottom line is that today we have a consolidated technology known as QCM, or Quantitative Complexity Management. QCM provides measures of complexity, not sensations. The way QCM works is simple. Suppose you need to design a turbine and you come up with two candidate designs, which may be represented by the two complexity maps shown below. The maps illustrate which parameters of our turbine are correlated with which other parameters. More correlations, i.e. interdependencies, mean the system is intricate, difficult to understand and to fix. The first solution has a complexity of 3.03 cbits; it has 12 correlations between its 10 parameters.
The more complex solution, shown below, has a complexity of 5.2 cbits and has 19 correlations between the same ten parameters.
Provided both solutions are acceptable, the less complex one is clearly the better choice. This is intuitive.
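Ontonix’s cbit metric itself is proprietary and cannot be reproduced here, but the underlying idea of a complexity map, a graph whose edges are strongly correlated parameter pairs, can be sketched in a few lines of Python. The parameter names, sample data and the 0.8 threshold below are all illustrative assumptions:

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def complexity_map(samples, threshold=0.8):
    """samples: dict mapping parameter name -> list of observed values.
    Returns the strongly correlated parameter pairs (the edges of the map)."""
    return [(a, b) for a, b in combinations(samples, 2)
            if abs(pearson(samples[a], samples[b])) >= threshold]

# Two hypothetical turbine designs, each described by three parameters
design_a = {"rpm": [1, 2, 3, 4], "temp": [2, 4, 6, 8], "flow": [5, 1, 4, 2]}
design_b = {"rpm": [1, 2, 3, 4], "temp": [2, 4, 6, 8], "flow": [3, 6, 9, 12]}
print(len(complexity_map(design_a)))  # 1 interdependency
print(len(complexity_map(design_b)))  # 3 interdependencies: the more intricate design
```

Counting edges is of course a crude proxy for the cbit measure, but it captures the intuition: the design whose parameters are more densely interdependent is the harder one to understand and to fix.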
We believe that AI-based systems should incorporate a QCM layer that measures the complexity of the solutions provided by the computational kernel (clearly, the kernel would need to provide multiple solutions) so as to select the least complex ones that satisfy objectives and constraints. If this is done in real time, it will be difficult to distinguish man from machine.
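Such a layer could, in its simplest form, look like the following sketch. The function name, the feasibility rule and the candidate data are invented for illustration; a real QCM layer would use an actual complexity measure rather than a raw correlation count:

```python
def qcm_select(solutions, complexity, feasible):
    """Hypothetical QCM layer: among the candidate solutions produced by
    the computational kernel, keep those that satisfy objectives and
    constraints, then return the least complex one."""
    candidates = [s for s in solutions if feasible(s)]
    return min(candidates, key=complexity) if candidates else None

# Illustrative candidates as (performance, number_of_correlations) pairs;
# assume a design is feasible if its performance is at least 10.
kernel_output = [(12, 19), (11, 12), (9, 5)]
best = qcm_select(kernel_output,
                  complexity=lambda s: s[1],
                  feasible=lambda s: s[0] >= 10)
print(best)  # (11, 12): the less complex of the two feasible designs
```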
One important difference between man and machine is that humans are capable of original and creative ideas. It is very difficult to hard-wire something like that into an algorithm. But QCM can help here too. The two simple complexity maps illustrated above are in reality topological sums of a number of other maps called modes or attractors. These modes may be selected and assembled in a myriad of ways, some of which may be counter-intuitive or, simply, original. In spaces having a large dimension, the number of such modes can be very large indeed.
In essence, we propose to move from AI to AI+QCM. Imagine the benefits if AI, which will penetrate industry and pervade our lives and our homes, were to reduce complexity wherever possible. Imagine, for example, driving strategies for autonomous vehicles that reduce traffic complexity. Just imagine how less complexity can mean more efficiency, fewer delays, less waste and less risk.
Our world is quickly getting more complex
We measure the complexity of the world as a system every year, based on over 250,000 parameters published by the World Bank. We can say that today the world is approximately 500% more complex than in the early 1970s. Moreover, we have created technologies that are rapidly increasing complexity everywhere. Think of the Internet of Things. How far do we think we can take things without actually managing complexity? Can we just keep growing more complex with impunity? Certainly not. There exists a so-called critical complexity, which is a sort of Pandora’s Box, except that it tells you how far you can go. You need to stay away from critical complexity if you want to avoid a systemic collapse. AI, in conjunction with QCM, can play a crucial role, not just in delivering sexier, more human-like solutions to a host of problems, but also in helping our global society stay on a path of resilient sustainability.
Enjoyed this publication? Here is more by Jacek Marczyk!
Who rates ratings? - Failure is often not contemplated in a model, and no model-building law forces one to do so. See how this problem has led to catastrophes!
In Math we Trust. That is precisely the problem! - How resilient is your business? Can you afford not to know? The economy is a dynamic system which is far too complex for us to understand.
Systemic Resilience Analysis: Supercomputers provide new tools for regulators, investors and governments - Discover Quantitative Complexity Theory, a different approach and a new set of analytical tools to address modern day challenges.
ABOUT THE AUTHOR - Jacek Marczyk, author of nine books on uncertainty and complexity management, developed the Quantitative Complexity Theory (QCT) and Quantitative Complexity Management (QCM) methodologies in 2003, together with a new complexity-based theory of risk and rating. In 2005 he founded Ontonix, a company delivering complexity-based early-warning solutions with particular emphasis on systemic aspects and turbulent economic regimes. He introduced the Global Financial Complexity and Resilience Indices in 2013. Since 2015 he has been Executive Chairman of Singapore-based Universal Ratings. Read more publications by Jacek Marczyk