The goal is to ‘inject’ complexity into an adversary’s computers and networks in a surgical manner, damaging or debilitating systems such as PLC, DCS and SCADA, and in particular the hubs of those systems, through which the effects of an attack can quickly propagate on a large scale. The aim is to raise network/process complexity to levels in the vicinity of critical complexity, so as to induce fragility, vulnerability and cascading failures. In essence, we are looking at a targeted alteration of specific sensitive network functions.
Inducing critical complexity levels in strategic networks can offer an effective preemptive measure, softening the enemy’s critical infrastructures/networks prior to a more conventional attack (cyber or otherwise).
Complexity-based aggression, when implemented on a large scale (i.e. when targeted at large networks or interconnected systems of networks), can offer a ‘subtle’, low-intensity and low-visibility intervention by virtue of its highly distributed nature. In other words, instead of a highly concentrated attack, a more diluted action may prove difficult to trace and counter while still leading to devastating systemic consequences.
We have verified the above statements on an empirical and numerical basis using our Quantitative Complexity Theory (science, not opinions, has always been our motto).
When it comes to complex systems – and before you say something really is complex, you should actually measure its complexity – failure isn’t always something obvious, and may even be difficult to engineer deliberately. In fact, there are many ways, or modes, in which such systems can fail (or be made to fail). In reality, failure is a combination (superposition) of various failure modes. Some of these modes can be quite likely, some require high energy expenditure to trigger, and some can be triggered with little effort but require an unlikely set of circumstances.
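As a toy illustration of this superposition-of-modes idea (all names and numbers below are invented, and this is not the author’s method): give each mode a rough likelihood and a trigger effort, and rank modes by likelihood per unit of effort to see which ones are ‘cheap’ to exploit and which hinge on an unlikely set of circumstances.

```python
# Toy illustration only (all numbers invented): each failure mode gets a rough
# likelihood and a trigger effort; ranking by likelihood per unit of effort shows
# which modes are "cheap" to exploit and which need unlikely circumstances.
failure_modes = [
    {"name": "hub overload",        "likelihood": 0.05, "effort": 1.0},
    {"name": "sensor desync",       "likelihood": 0.30, "effort": 8.0},
    {"name": "rare race condition", "likelihood": 0.01, "effort": 0.5},
]

for mode in sorted(failure_modes,
                   key=lambda m: m["likelihood"] / m["effort"],
                   reverse=True):
    ratio = mode["likelihood"] / mode["effort"]
    print(f"{mode['name']}: likelihood/effort = {ratio:.3f}")
```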
This means that it may be possible to provoke the collapse of large networks/systems by first identifying what their failure modes are and then, within each mode, pinpointing the key variables (nodes) that can cause a cascading failure. Once these nodes have been identified, that is where the attack should be concentrated. The way this is accomplished is not intuitive: it is not sufficient to ‘invert’ the conventional QCM-based process of system ‘robustification’ in order to arrive at the complexity-as-a-weapon logic which induces systemic fragility. What is certainly needed, though, is plenty of supercomputer firepower.
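For a minimal, generic sketch of what ‘pinpointing the key nodes’ can look like (this is not the QCM-based procedure the author refers to; the graph below is a made-up stand-in for a real control network, and betweenness centrality is used as a crude proxy for node criticality), one can rank nodes and check how much connectivity is lost when the top hub is removed:

```python
# Minimal sketch, not the QCM-based procedure described above: rank nodes of a
# synthetic network by betweenness centrality as a crude stand-in for "key nodes",
# then check how much connectivity is lost when the top hub is removed.
import networkx as nx

# Hypothetical stand-in for a hub-dominated control network topology.
G = nx.barabasi_albert_graph(n=200, m=2, seed=42)

def largest_component_fraction(graph):
    """Fraction of nodes still inside the largest connected component."""
    largest = max(nx.connected_components(graph), key=len)
    return len(largest) / graph.number_of_nodes()

# Rank candidate hub nodes by betweenness centrality.
ranking = sorted(nx.betweenness_centrality(G).items(),
                 key=lambda kv: kv[1], reverse=True)
top_node = ranking[0][0]

damaged = G.copy()
damaged.remove_node(top_node)

print(f"baseline connectivity: {largest_component_fraction(G):.2f}")
print(f"after removing hub {top_node}: {largest_component_fraction(damaged):.2f}")
```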
Who would be the target of a large-scale systemic ‘complexity attack’? Rogue states that threaten global peace and support terrorism.
Jacek Marczyk
Author of nine books on uncertainty and Complexity Management, Jacek developed the Quantitative Complexity Theory (QCT), a new complexity-based theory of risk and rating, in 2003. In 2005 he founded Ontonix, a company delivering complexity-based early-warning solutions with particular emphasis on systemic aspects and turbulent economic regimes. Read more publications by Jacek
Hello Rich, how about a Skype conversation in the next few days?
Howdy Jacek – Absolutely. I’m in Atlanta Eastern Time and glad to be available at your convenience. This Friday (Nov 24) would be fine. Or any time thereafter.
That sounds like fun. Oddly, I was surprised as a youth that these types of conversations were not well supported at cocktail parties :).
Hello Rich. Thanks much for your comment! All our work in complexity is of a quantitative nature, which means that all our statements on complexity and its properties are backed by actual (numerical) experiments. What we have observed over the past decade, dealing with thousands of different systems, is that larger systems – i.e. systems described by very many variables, thousands or even millions – have many more ways to fail. These failure modes are often non-intuitive. Many modes of failure require very little energy to trigger. Think of the number of deadly diseases a human can be exposed to: certainly far more than a (simpler) insect.
I believe the issue you are treating is that of granularity, or coarse-graining: essentially, how many variables you choose to describe a given system with. You can go down to the quark level or you can simply stay at the molecular level. It all depends on your computational budget and on the precision and resolution of your observational instruments. It also depends greatly on the objective of your study.
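As a purely illustrative sketch of the point (the signal and the block size are arbitrary), here is the same data described at two levels of granularity:

```python
# Illustrative only: the same signal described at two levels of granularity.
import numpy as np

rng = np.random.default_rng(0)
fine = rng.normal(size=10_000)                 # fine-grained: 10,000 variables
coarse = fine.reshape(100, 100).mean(axis=1)   # coarse-grained: 100 block averages

print(fine.size, "variables at fine resolution")
print(coarse.size, "variables after coarse-graining")
```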
If you are interested in a more detailed discussion let me know.
Jacek
Greetings Jacek and thanks much for the offer to continue. Unfortunately, I do not have the mathematical chops to truly add to the conversation, but as usual, that won’t stop me from blabbering. I’ll offer my amateur view for whatever value it can provide. I’ll lay out some brain dandruff for your consideration as the spirit moves you.
You are absolutely correct on granularity. My thinking is this:
Code is composed, to some extent whether binary or quantum, of grains and superstructures, or connective tissue. In XML, the tags would be the grains and XSLT (or my fav, SVG) would be the superstructure, or what I tend to call business rules. The grains and the BRs, combined with some predestined outcome (say, a graphically based inventory control system), make up software.
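A toy rendering of that split (all names and data below are invented): the XML elements play the role of the grains, and a small transform function plays the role of the business rule / superstructure that turns grains into an outcome.

```python
# Toy rendering of the grains / business-rules split (names and data invented):
# the XML elements are the "grains"; transform_inventory() plays the role of the
# business rule (superstructure) that turns grains into an outcome.
import xml.etree.ElementTree as ET

INVENTORY_XML = """
<inventory>
  <item sku="H-001" kind="hammer" shelf="A3"/>
  <item sku="N-042" kind="nail"   shelf="B7"/>
</inventory>
"""

def transform_inventory(root):
    """Business rule: group item SKUs by kind."""
    grouped = {}
    for item in root.iter("item"):   # iterate over the grains
        grouped.setdefault(item.get("kind"), []).append(item.get("sku"))
    return grouped

print(transform_inventory(ET.fromstring(INVENTORY_XML)))
```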
Not sure this is accurate, but I would think attacking software (especially if we use a really broad definition of software as pretty much everything but a straight out physical attack) requires either attacking the grains (the virus model?) or attacking the business rules.
When a system is operating nominally, there will be satisfactory outcomes when the software cycles through its routines: I would find the item on the shelf where it is listed. Impacting a grain might result in a hammer showing up where a nail should be. Impacting a BR might count all hammers as nails.
Let’s start with a DB of hammers and nails. If we use boids (flocking) to measure the stability of the system (via a dendrogram) – all hammers need to flock together, as do all nails – that provides verification that the emergent behavior meets the objective. I want to know about every hammer and all hammers, especially if I can do that through a third party (the software/emergence). I can check the grains by observing the history of clustering: if the hammers suddenly slide to a new (topological) location, I know something is up.
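One rough way to make that ‘flock drift’ check concrete (the features, sizes and 50% overlap threshold below are all invented, and hierarchical clustering stands in for boids): cluster two snapshots of the item records and flag any item whose flock membership changes.

```python
# Rough sketch of the "flock drift" check; features, sizes and the 50% overlap
# threshold are all invented, and hierarchical clustering stands in for boids.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def flock_labels(features, k=2):
    """Hierarchical (dendrogram-style) clustering into k flocks."""
    return fcluster(linkage(features, method="ward"), t=k, criterion="maxclust")

def flock_mates(labels, i):
    """Indices sharing a flock with item i."""
    return set(np.flatnonzero(labels == labels[i]))

rng = np.random.default_rng(1)
hammers = rng.normal([1.0, 30.0], 0.1, size=(20, 2))   # (weight kg, length cm)
nails = rng.normal([0.01, 5.0], 0.001, size=(20, 2))
yesterday = np.vstack([hammers, nails])

today = yesterday.copy()
today[3] = [0.01, 5.0]   # a hammer record tampered to look like a nail

then_labels, now_labels = flock_labels(yesterday), flock_labels(today)
for i in range(len(today)):
    mates_then = flock_mates(then_labels, i)
    mates_now = flock_mates(now_labels, i)
    if len(mates_then & mates_now) < 0.5 * len(mates_then):
        print(f"item {i} has slid to a different flock")
```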
I can check the BRs with some version of this: Shannon’s information theory shows heterogeneity at the top of a bell curve. It seems that, as long as the BRs are nominal, the curve will be pretty flat (not interesting in Shannon’s terms). If a BR goes out of whack, the curve should tilt up and become more interesting.
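One loose, concrete reading of that check (data and threshold below are invented): track the Shannon entropy of the category mix the business rule produces and alert when it drifts from a baseline, e.g. when a broken BR starts counting every hammer as a nail.

```python
# Loose, concrete reading of the entropy check (data and threshold invented):
# track the Shannon entropy of the category mix the business rule produces and
# alert when it drifts from the baseline (a broken BR counting every hammer as
# a nail collapses the mix to a single category).
import math
from collections import Counter

def shannon_entropy(labels):
    """Shannon entropy (bits) of a list of category labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

baseline = shannon_entropy(["hammer"] * 50 + ["nail"] * 50)   # nominal mix: 1 bit
observed = shannon_entropy(["nail"] * 100)                    # broken BR: 0 bits

if abs(observed - baseline) > 0.25:   # hypothetical alert threshold
    print(f"entropy drifted: {baseline:.2f} -> {observed:.2f} bits")
```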
An attack would show up pretty readily if the flocks shift or the system gets more interesting.
Seems like these days tools like acyclic graphs and/or blockchain could have a lot of impact on software security. I can imagine smart contracts at the BR level. The reason I like the idea of looking at flocking and the slope of a tangent to the “interesting” curve is that it seems computationally cheaper…?
My mental model is this: you watch a pot of water on a low boil. It’s very consistent. If suddenly one quadrant boiled more vigorously, or stopped boiling, it would seem odd. I think the first step would be to run the proposed tool and add flocking. After the flocks stabilize, you’d be able to know how the steady boil looks. Any unexpected emergent behavior (an attack) would be really difficult to hide. I guess after that you just figure out why it’s not boiling as expected – easy!
Greetings,
I have just read Jacek Marczyk’s post “Complexity: A next generation…”
My work involves the organizational management/change aspects of technology integration, mainly managing fear of change and the occasional plain old laziness.
I’m far from having sufficient technical skill in this area, but that’s never stopped me from stating my opinion.
My actual comment relates to the line
“The larger the system, the greater the probability of unexpected failure.”
As a GIS person, I’ve played with scale all my life. I see a path through complexity, especially failure scenarios, by redefining the system: instead of one large, complex system, many small (and somewhat less complex) systems. I rarely measure how well the system (in my case, moving SMEs for a government agency to a content management system) is functioning; I have focused on making sure the SME is functioning. That means, of course, that I have to “personally” ensure the supporting systems are functioning, so from the project manager seat I really do get to view the known and visible system, but I don’t spend much time measuring it. As an example, this week I reported that an SME had submitted an image and, without being instructed, included the image caption. My theory, which I’ve somehow managed to convince my management of as well, is that once the seed is planted and nurtured a bit, the system will emerge.
The problem with defining a system as complex is that the system actually reaches down to the quark level and below, finally reaching quantum uncertainty (at least as we understand it currently). Combined with chaos theory, one never knows which quark will start the avalanche, so to speak. In that view, we’re stuck in Zeno’s paradox when trying to move forward.
In my case, I keep a close eye on negative comments about the system and try to seek out the specific SME to correct that behavior or gather a new insight. I suppose I could say that if the number of negatives exceeds my ability to correct, the system is either insufficiently designed or has entered a catastrophic failure phase, in which case I’ll get out my notebook and record new and novel ideas that will emerge.
Thanks!
Rich Hammond