The VUCA Environment

VUCA is a term that first appeared in 1987, drawing on the leadership theories of the American scholars Warren Bennis and Burt Nanus. It refers to an environment that is volatile, uncertain, complex, and ambiguous.

In a volatile environment, the nature of change can alter quickly, and the speed of change can be rapid. The classic example is, of course, stock market prices, but volatility also applies in other social arenas, for example the political arena when a scandal breaks.

In an uncertain environment, events and the outcomes of actions are unpredictable, can come as a surprise, and previous experience may not apply. Weather is an example in which unexpected droughts or deluges of rainfall occur.

Complexity refers to the way in which everything in the environment is causally interrelated. There may be no single cause resulting in a single effect, but rather multiple causes and effects that defy analysis. When situations are complex, a change in one place can have unintended consequences elsewhere. Chaos theory can also apply: for example, a small change in the behaviour of one individual can propagate through a crowd and completely alter its behaviour.
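To illustrate this sensitivity, the short Python sketch below uses the logistic map, a standard toy model of chaotic behaviour (my choice of illustration, not something specific to crowds): two trajectories that begin almost identically soon bear no resemblance to one another.

```python
# Logistic map: a standard toy model of chaotic behaviour.
# Two starting values that differ by one part in a billion soon
# follow completely different trajectories.

def logistic(x, r=4.0):
    """One step of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1 - x)

x_a, x_b = 0.200000000, 0.200000001   # almost identical starting points
for step in range(1, 51):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: a = {x_a:.6f}, b = {x_b:.6f}, gap = {abs(x_a - x_b):.6f}")
```

After roughly thirty iterations the two series are effectively unrelated, despite differing initially by only one part in a billion.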

Finally, ambiguity refers to a lack of understanding or a misreading of the situation. Facts are unclear and cause and effect may be confused. This typically applies to the interpretation of historical events. Different historians can give different explanations based on different interpretations of the available information. For example, the two parties in a territorial dispute may both believe that their claim is reasonable due to different historical interpretations.

VUCA is a product not only of our inability to understand complexity and to model it precisely, but also of genuinely random events at the atomic and sub-atomic level. Examples of the latter are the radioactive decay of atoms and the appearance of virtual particles. Such events interact with the physical universe, and the change that they cause is magnified as it propagates ever more widely.

The VUCA concept can be used as an excuse for inaction and a lack of forward planning. However, the advantage of accepting it as reality is that we can better identify the risks associated with our actions and have measures ready if things do not go as we had hoped.

Unfortunately, we have an optimism bias and often underestimate the difficulties and risks involved in a project or enterprise. This is particularly the case when promoting a pet project to others. On the other hand, a greater awareness of the VUCA nature of reality can lead to a greater understanding of the knowns and unknowns in a situation. It can also lead to the identification of potential surprises and, where appropriate, trigger action to clarify any critical unknowns. Finally, it can lead to a better understanding of the potential threats and opportunities in a situation and, where appropriate, to the planning of measures to avoid those threats or seize those opportunities.

A good understanding of an organisation’s vulnerabilities will enable it to plan resilience measures which limit damage in the face of the unexpected. A good understanding of an organisation’s objectives will better enable it to seize opportunities should they arise.

Clearly, this requires an organisation to be agile, flexible, and adaptable in the face of the unexpected. It also requires it to have a range of interventions, mitigation measures, plans B and C, etc., available should a change of direction become necessary. Finally, it requires the organisation to carefully monitor situations and the outcomes of its decisions.

This also applies to us as individuals. For example: we insure our homes, cars and holidays against the unexpected; we wear safety equipment when playing sports; we maintain cash reserves in the bank to see us through difficult times; and so on.

In the absence of such measures, and in a VUCA world, organisations will inevitably run into difficulties and ultimately fail. A failure to recognise the VUCA world is one of the main reasons why government projects so often fail. In 2017, PricewaterhouseCoopers AG of Switzerland investigated the reasons for this in a report entitled “Are public projects doomed to failure from the start?”. They found that the complexity of such projects was often underestimated and that an overoptimistic attitude would prevail. In practice, however, the political, organisational, and technical complexity of a project could render it unmanageable. They also found that deadlines were often set for political reasons, and that political agendas could lead to an unwillingness to abandon projects that no longer fitted the business case. Furthermore, it was often the case that many different organisations needed to co-operate, but their IT systems differed and they could resist the necessary changes to their practices. PricewaterhouseCoopers did, however, find that with proper management and diligence none of these factors was insurmountable.

Similar problems arise with government policy interventions. Politicians, like everyone else, have a limited ability to understand complexity, so in practice the process of intervention is one of innovation, trial, and error. In other arenas there may be many actors, some of whom will succeed and others fail, so trial and error is acceptable. However, government differs from the rest of society in that it is the sole actor and there is just one trial. Unfortunately, it is usually inexpedient for a politician to admit to error, so government error is often only corrected when the opposition takes power.

On the positive side, many Western governments are now recognising the VUCA world and putting measures in place to manage their functions in its light. Recent UK guidance on managing complexity can be found at https://www.gov.uk/government/publications/systems-thinking-for-civil-servants

Systems Theory from a Cognitive and Physicalist Perspective

In June 2022, I commented in my final post on causality and systems theory that General Systems Theory was not as well developed as I had hoped, and that more work was required before I could make further posts on the topic.

That work is now complete. However, the resulting article is too long for a single blog post and cannot be broken down into a series. So, I have produced it in PDF form, and it can be downloaded here.

It updates my earlier articles and PDFs “Joining Up the Dots”, “How we Understand a Complex Universe”, “The Importance of Information”, and “What is Information at Source”, which have therefore been deleted from the website.

A brief description of the article follows.

The cognitive perspective holds that we are our minds and cannot escape the constraints imposed by their biology and evolutionary history. Nevertheless, human cognition is a reasonably accurate representation of reality. Physicalism holds that space-time comprises the whole of reality and that everything, including abstract concepts and information, exists within it.

From this perspective, I describe some of the main concepts in systems theory. They include: the importance of structure in forming meaningful systems; the nature of relationships, causality, and physical laws; and the significance of recursion, hierarchy, holism, and emergence. I also discuss cognitive factors including: our mental limitations; the nature of information and language; and our search for knowledge in a world of complexity and apparent disorder.

The article concludes with the implications of this perspective for General Systems Theory and Social Systems Theory, and suggests further work to advance these disciplines.

The article has been written in the style of an academic paper because I will submit it to relevant journals in the near future. However, I have used plain English and explain my ideas in a step-by-step manner. There are also many diagrams to help illustrate them.

I hope that you find the article interesting and enlightening.

A Systems Model of Human Organisation (Part 2)

This post is part two of the article begun last week. Due to its length, I have split the article into three posts, but if you would like to read it in one sitting, a copy can be downloaded here: https://rational-understanding.com/my-books#systems-model.

Internal Feedback

Adapting internal processes involves an internal feedback loop in which the command component’s role is to:

  1. gather information from subordinate components. This information is subject to darkness and miscommunication. Darkness implies that the full picture can never be known. Miscommunication may involve subordinate components providing misinformation or failing to supply relevant facts. Thus, the role of the command component is also to ensure that the supply of information is relevant and properly policed.
  2. issue instructions, laws, rules, regulations, norms, etc. to subordinate components and to police them. As will be explained later, ideally this should also include rules to prevent negative competition. There can be difficulties when a command component polices itself, and thus, in a democracy for example, law-making and enforcement are separated. A rough sketch of this loop is given below.
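The Python fragment below sketches this loop with a hypothetical command component that repeatedly gathers partial, noisy reports from its subordinate components and issues a uniform corrective instruction. The class names, the 50 per cent sampling rate (darkness), and the ±20 per cent distortion (miscommunication) are illustrative assumptions only.

```python
import random

class Subordinate:
    """A subordinate component with a single measurable output."""
    def __init__(self, output):
        self.output = output

    def adjust(self, correction):
        self.output += correction


class CommandComponent:
    """Minimal sketch of the internal feedback loop described above.
    'Darkness' is modelled by seeing only half of the components;
    'miscommunication' by random distortion of their reports."""

    def __init__(self, target_output):
        self.target_output = target_output

    def gather_information(self, components):
        visible = random.sample(components, k=max(1, len(components) // 2))   # darkness: partial view
        reports = [c.output * random.uniform(0.8, 1.2) for c in visible]      # miscommunication: noisy reports
        # Estimate the total output from the partial, noisy picture.
        return sum(reports) / len(reports) * len(components)

    def issue_instructions(self, components, estimated_total):
        # A single, uniform rule issued to every subordinate component.
        correction = (self.target_output - estimated_total) / len(components)
        for c in components:
            c.adjust(correction)


components = [Subordinate(random.uniform(5, 15)) for _ in range(10)]
command = CommandComponent(target_output=120)
for cycle in range(5):
    estimate = command.gather_information(components)
    command.issue_instructions(components, estimate)
    actual = sum(c.output for c in components)
    print(f"cycle {cycle}: estimate = {estimate:6.1f}, actual total = {actual:6.1f}")
```

Because the command component never sees the full picture, its estimates and corrections are always approximate, which is the point of the darkness principle.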

External Feedback

Influencing the organisation’s external environment also involves a feedback loop. Outputs from the organisation act as inputs to other organisations in the environment. These may then be processed to yield the original organisation’s desired inputs. At its simplest level, an individual may pay for, or in some other way trade for, food. At a higher level, a business may lobby government for reduced taxation or regulation. These external feedback loops are what bond the levels of the organisational hierarchy together into society.
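A minimal sketch of such a loop might use a hypothetical household and farm standing in for two organisations; the functions and quantities below are purely illustrative.

```python
# Two hypothetical organisations whose outputs are each other's inputs:
# the household's output (money) is the farm's input, and the farm's
# output (food) is the household's desired input.

def household(inputs):
    """Consumes food and outputs money (for example, wages spent on shopping)."""
    return {"money": 10} if inputs.get("food", 0) > 0 else {"money": 0}

def farm(inputs):
    """Consumes money and outputs food."""
    return {"food": inputs.get("money", 0) // 2}

flows = {"food": 1}                # an initial endowment of food
for cycle in range(3):
    flows = household(flows)       # household output (money) ...
    flows = farm(flows)            # ... becomes the farm's input, yielding food
    print(f"cycle {cycle}: {flows}")
```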

Each component organisation’s demand for inputs is a motivator. If, at the level at which external feedback occurs, other component organisations share the same motivator, they can act in one of three ways:

  1. Negative Unilateralism. The organisation acts unilaterally and in negative competition with others. The terms unilateral and multilateral are normally associated with international affairs, but here they are used more generically. Negative competition involves preventing competitors from achieving their goals. It includes, but is not limited to, providing misinformation about either party’s motivation, abilities, and intentions. In this scenario, each organisation strives for its inputs from what may be a limited resource, and no functioning parent organisation emerges. Because negative competition leads to inefficiencies, the full potential benefits are unlikely to be achieved. Finally, open conflict can arise. It is notable that this largely reflects the state of global organisation today.
  2. Positive Unilateralism. The organisation acts unilaterally and in positive competition with others. Positive competition occurs when competitors each strive to be the best, as in the case of a running race. It leads to a recognition of which component is best suited to what function. This, in turn, leads to co-operation. Each component finds the niche to which it is best suited and/or in which it is the most efficient. Thus, a functioning parent organisation with a command component ultimately evolves. On average, each component organisation will gain greater benefits than under the previous option. However, sub-optimisation applies, and the benefits may not be as great as for those who are overwhelmingly successful in negative competition.
  3. Multilateralism. The organisation acts in co-operation with others. In this case a parent organisation with a command component is designed. The European Union is an example. However, because each component organisation strives for efficiency, there is a risk that they will exploit others, rather than contribute to the common effort. This would reduce the benefits for all.

In practice, the above options exist as points on a scale. There are numerous intermediate points between options 1 and 2, and between options 2 and 3, which depend on the attitudes and decisions of the component organisations.
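To make the contrast concrete, the sketch below compares the three options for two organisations, A and B, drawing on a limited shared resource. The payoff figures are entirely hypothetical; they are chosen only to reflect the relationships described above: waste under negative competition, niche-finding with some sub-optimisation under positive competition, and a co-ordination overhead under multilateralism.

```python
# Purely hypothetical payoffs for two organisations, A and B, drawing on a
# limited shared resource. The figures are arbitrary; they only encode the
# relationships described in the text: negative competition wastes resources
# (a low average, although the outright winner may still do well), while
# positive competition and multilateral co-operation avoid most of that waste.

RESOURCE = 100

def negative_unilateralism():
    wasted = 40                            # effort spent blocking the competitor
    return 55, RESOURCE - wasted - 55      # overwhelming winner takes 55, loser gets 5

def positive_unilateralism():
    return 50, 40                          # each finds its niche; some sub-optimisation

def multilateralism():
    overhead = 10                          # cost of the designed parent organisation
    share = (RESOURCE - overhead) / 2
    return share, share

options = [("1. negative unilateralism", negative_unilateralism),
           ("2. positive unilateralism", positive_unilateralism),
           ("3. multilateralism", multilateralism)]

for name, strategy in options:
    a, b = strategy()
    print(f"{name:27s} A = {a:4.1f}, B = {b:4.1f}, average = {(a + b) / 2:4.1f}")
```

Changing the figures changes the details but not the qualitative picture: the average benefit is lowest where resources are spent on blocking competitors.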

Because we are a eusocial species, we must balance individual or unilateral action in our short-term interest with communal or multilateral action yielding longer-term benefits. For every organisation, there is an optimum efficiency which can be achieved by using positive unilateralism or multilateralism where appropriate. Nations with conflict between the political left, who favour collectivism, and the right, who favour individualism, should take note.

Optimisation applies to an organisation that acts unilaterally. If an organisation acts multilaterally, then we must rise through the hierarchy until we reach either the global system or a parent or grandparent acting unilaterally. The requirement for optimisation then cascades down through component and sub-component organisations, which may then need to operate sub-optimally.
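This rule can be expressed as a simple walk up the hierarchy. The sketch below is illustrative only; the organisation names and the hierarchy are hypothetical.

```python
class Organisation:
    """A node in an organisational hierarchy (names and structure are illustrative)."""
    def __init__(self, name, unilateral, parent=None):
        self.name = name
        self.unilateral = unilateral       # does this organisation act unilaterally?
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

def optimisation_level(org):
    """Climb to the organisation whose efficiency is to be optimised:
    the nearest unilaterally acting ancestor, or the global system."""
    while not org.unilateral and org.parent is not None:
        org = org.parent
    return org

def cascade(org, depth=0):
    """The requirement then cascades down; components may run sub-optimally."""
    role = "optimised" if depth == 0 else "may be sub-optimal"
    print("  " * depth + f"{org.name}: {role}")
    for child in org.children:
        cascade(child, depth + 1)

world = Organisation("global system", unilateral=True)
bloc = Organisation("trading bloc", unilateral=True, parent=world)
nation = Organisation("member nation", unilateral=False, parent=bloc)
firm = Organisation("firm", unilateral=False, parent=nation)

cascade(optimisation_level(firm))    # climbs from the firm to the trading bloc
```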

When influencing its external environment, the role of the command component of an organisation is to:

  1. gather information from the external environment. In the systems model, this information is an input, which itself must be sought by influencing the external environment.
  2. make decisions in the interest of the relevant organisation as a whole. The relevant organisation may be the one commanded, its parent, or its grandparent, whichever operates unilaterally.
  3. manage the balance between unilateral and multilateral action to optimise the efficiency of the relevant organisation.
  4. issue commands to subordinate components for the necessary outputs.

This article will conclude with part 3 next week.