How to Gain Understanding

Introduction

People understand the world through pattern recognition. Recurring patterns of events attract our attention; we remember them, attach meaning to them, and later use them as an aid to predicting the world. This trait has evolved to help our survival and the propagation of our genome. Non-recurring events are of lesser interest because they do not permit prediction, and we are, therefore, less likely to remember them or attach meaning to them.

Causality as a basis

Such recurring patterns of events have their basis in causality, and it is likely that our perception of causality is itself inherited. Certainly, other animals seem to grasp cause and effect, as evidenced by Pavlov’s famous behavioural experiments. We have also probably all experienced a young child repeatedly asking “why?”. This is probably the child exercising an inherited ability to recognise causality.

Recognition

Noticing these patterns is highly tentative at first. We merely notice similarities between events and feel an intangible sense of order. We do not have the words to describe what we notice, and it is not integrated into our general worldview. However, as our brains absorb the new information and make the necessary connections, our understanding grows, and we can find words to communicate the insights. A general rule forms that we can use predictively. Unfortunately, this can be a slow process, often involving several nights of good sleep and some research into the topic. This is effectively the same as the creative process of saturation, incubation, inspiration, and verification described in an earlier article, but with saturation replaced by experience.

We can also seek the fundamental origins of the recurring patterns that we observe. For example, the very concept of causality was discovered in this way: patterns were noticed, and causality was then recognised as a further pattern within them.

Limitations

When we seek meaning, we are essentially attempting to understand a pattern that describes the universe in its entirety. Unfortunately, however, pattern recognition is limited by our cognitive abilities. The principle of darkness applies, and our minds are simply not complex enough to model such a pattern. We can only recognise relatively simple ones, such as causal relationships and feedback loops, and even those with difficulty. If there is any meaning to the universe, then it is certainly beyond our ability to perceive it. It would be more sensible to recognise this rather than invent simplistic or mystical explanations. In practice, we must satisfy ourselves with understanding small parts of the world around us. For example, the purpose of this blog is to convey an understanding of human nature and society.

Explanation

As explained above, to understand a recurring pattern, it must be integrated into our general worldview. Obviously, if our worldview is a mystical or religious one, then we may give those patterns an explanation of that type. On the other hand, we will give the patterns a scientific explanation if our worldview is of that nature.

Feedforward

The process of predicting events and acting proactively is known in systems science as feedforward. The term is also used in personnel management to describe the training of staff to meet future business needs. The word suggests that feedforward is the opposite of feedback, but this is true only in the sense that feedback reacts to past events, whilst feedforward anticipates future ones. Feedforward relies on a knowledge of causal patterns. It is, therefore, a feature of agents or of systems created by agents.
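
To make the distinction concrete, here is a minimal sketch in Python of a hypothetical room heater. The feedback version reacts to an error that has already occurred, while the feedforward version uses a known causal pattern (heat loss rises with the indoor/outdoor temperature difference) to act on a prediction. All names and numbers are invented purely for illustration.

```python
# Minimal sketch contrasting feedback (reactive) and feedforward (predictive) control.
# The heat-loss model and all figures are invented for illustration only.

def feedback_heater(measured_temp, target_temp, gain=0.5):
    """React to the error that has already occurred."""
    error = target_temp - measured_temp
    return max(0.0, gain * error)          # heating power applied after the fact

def feedforward_heater(forecast_outdoor_temp, target_temp, loss_coeff=0.1):
    """Act on a predicted disturbance, using a known causal pattern:
    heat loss is roughly proportional to the indoor/outdoor difference."""
    predicted_loss = loss_coeff * (target_temp - forecast_outdoor_temp)
    return max(0.0, predicted_loss)        # compensate before any error appears

print(feedback_heater(measured_temp=18.0, target_temp=21.0))
print(feedforward_heater(forecast_outdoor_temp=-5.0, target_temp=21.0))
```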

How to Use this Process

We can reverse this recognition process. This involves designing a causal pattern and then looking for it in the world around us. Another approach is to generalise theories from specialised fields into general causal patterns. Once a pattern, for example the replication of information, has been created, we can then look for manifestations of it in the real world. In this way we may notice, for example, cellular division, the viral spread of misinformation on the internet, and so on. As explained in the previous article, there are many ways in which information can be altered during replication, so two copies of the same information can come to contradict one another. This in turn can lead to competition over which is correct and, as will be described in a future article, to conflict. From this model it is possible to suggest reasons for real-world events such as conflicts between closely related religious factions.
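
As a rough illustration of this pattern, the following Python sketch copies a short string of symbols down two independent lineages, introducing occasional errors, and then counts the points on which the two descendants now contradict one another. The model and its parameters are invented purely to illustrate the idea.

```python
import random

# Toy model of information replication with occasional copying errors.
# All names and parameters are illustrative, not taken from the article.

def copy_with_errors(message, error_rate=0.05):
    """Return a copy of the message, flipping each symbol with a small probability."""
    return [(1 - bit) if random.random() < error_rate else bit for bit in message]

random.seed(1)
original = [1, 0, 1, 1, 0, 0, 1, 0]

# Two independent lineages of copies descending from the same original.
lineage_a = original
lineage_b = original
for _ in range(10):
    lineage_a = copy_with_errors(lineage_a)
    lineage_b = copy_with_errors(lineage_b)

disagreements = sum(a != b for a, b in zip(lineage_a, lineage_b))
print(f"Symbols on which the two copies now contradict each other: {disagreements}")
```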

In different fields and specialities, different words are often used for similar concepts. This tends to obscure similarities between the causal processes involved. However, once we have a pattern in mind, its recognition in the real world or in another field of expertise becomes much easier.


Further Principles of General Systems Theory

I will describe General Systems Theory in more detail in the next few articles, and then provide a systems-based model which can be used to understand human society, how it works, and why it sometimes fails. This model uses the principles described below.

Near Decomposability. Many natural and artificial systems are structured hierarchically, and their components can be seen as occupying levels. At the highest level is the system in its entirety; its components occupy lower levels. As we move down through the levels we encounter ever more numerous, smaller, and less complex components. The rates of interaction between components at one level tend to be faster than those at the level above. The most obvious example of this is the speed with which people make decisions. An individual can make decisions relatively quickly, but the rate steadily slows as we move up the hierarchy through small groups, organisations, and nations, to global society.

Sub-optimisation. This principle recognises that a focus on optimising the performance of one component of a system can lead to greater inefficiency in the system as a whole. Rather, the whole system must be optimised if it is to perform at maximum efficiency, and its components must sometimes operate sub-optimally to achieve this.
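
A toy example may make this clearer. In the Python sketch below, which uses invented figures, the first stage of a two-stage production line is tuned to run at its own maximum rate; finished output is still limited by the second stage, and the excess work-in-progress simply incurs cost. The line as a whole does better when the first stage deliberately runs below its own optimum.

```python
# Toy two-stage production line illustrating sub-optimisation.
# All figures are invented for illustration only.

def run_line(stage_a_rate, stage_b_rate=5, hours=8, holding_cost=2, unit_value=10):
    """Finished output is limited by the slower stage; excess work-in-progress costs money."""
    produced_by_a = stage_a_rate * hours
    finished = min(produced_by_a, stage_b_rate * hours)
    work_in_progress = produced_by_a - finished
    return finished * unit_value - work_in_progress * holding_cost

# Stage A optimised in isolation (runs flat out at 9 units/hour): lower overall profit.
print(run_line(stage_a_rate=9))
# Stage A deliberately running "sub-optimally", matched to stage B: higher overall profit.
print(run_line(stage_a_rate=5))
```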

Darkness. This principle states that no system can be known completely. The best representation of a complex system is the system itself; any other representation will contain errors. Thus, the components of a system only react to the inputs they receive, and cannot “know” the behaviour of the system as a whole. For the latter to be possible, the complexity of the whole system would need to be present in the component. The expression “black box” is used to describe a system or component whose internal processes are unknown, and “white box” to describe one whose internal processes are known. Most systems are, of course, “grey boxes”.

An interesting question arises from the principles of near decomposability and darkness. As explained in previous articles, human beings are motivated by needs and contra-needs. The question is, of course, whether groups of individuals, species, and ecosystems also have needs and contra-needs which differ from those of their individual members. Are reduced birth rates, for example, a natural species response to population pressures? If so, then near decomposability implies that, because groups, species, and ecosystems are more complex systems than single individuals, the processes which satisfy those needs will proceed more slowly. Darkness implies that as individuals we would be unable to “know” the processes involved, although as a society we might.

Equifinality. The processes in a system can, but do not necessarily, have an equilibrium point, i.e., a point at which the system normally operates. If, for any reason, the processes are displaced from it, then they will subsequently alter to approach that point once more. This characteristic is known as homeostasis. Thus, a given end state can be reached from many initial states, a feature known as equifinality. For example, if a child’s swing is displaced from the vertical and released, then, after swinging to and fro for a while, it will eventually return to the vertical.
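
The swing example can be simulated directly. The Python sketch below, with illustrative constants, releases a damped pendulum from several different starting angles; every run settles at the same vertical position, which is equifinality in miniature.

```python
import math

# Damped pendulum: many different starting states, one final equilibrium (equifinality).
# The constants are illustrative only.

def settle(initial_angle, damping=0.5, gravity=9.81, length=1.0, dt=0.01, steps=5000):
    """Integrate the swing's motion and return the angle it finally settles at."""
    angle, velocity = initial_angle, 0.0
    for _ in range(steps):
        accel = -(gravity / length) * math.sin(angle) - damping * velocity
        velocity += accel * dt
        angle += velocity * dt
    return angle

for start in (0.2, 0.8, 1.4):                      # radians from the vertical
    print(f"start {start:+.1f} rad -> final {settle(start):+.4f} rad")
```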

Multifinality. It is possible for the processes in a system to have more than one stable point. If a process is displaced a little from one of them, it may ultimately return. However, if it is displaced too far, then it may subsequently approach another equilibrium point. This is a feature of natural ecosystems. If they are damaged in some way, they will ultimately return to a stable state. However, this state will often differ from the original state that existed before the damage.
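
Multifinality can be sketched in the same way. The toy system below, dx/dt = x - x^3, is invented for illustration and has two stable points, at +1 and -1. A state displaced a little from one of them returns to it, while a state displaced past the tipping point at 0 settles at the other.

```python
# Toy bistable system dx/dt = x - x^3: stable points at +1 and -1 (multifinality).
# Purely illustrative; not a model of any particular ecosystem.

def evolve(x, dt=0.01, steps=2000):
    """Follow the state until it settles at whichever stable point is nearest in effect."""
    for _ in range(steps):
        x += (x - x**3) * dt
    return x

print(evolve(0.9))    # small displacement from +1: returns to about +1
print(evolve(-0.2))   # displaced past the tipping point at 0: settles near -1
```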

Dynamic Equilibrium. This principle is like that of equifinality but applies to rates of change in systems. Some systems are dynamic and have a stable rate of change. If displaced from that rate of change for any reason, they will ultimately return to it. This is known as homeorhesis, a term derived from the Greek for “similar flow”. Again, a dynamic system may have several stable rates of change.

Relaxation Time. Relaxation means the return of a disturbed system to equilibrium. The time it takes to do so is known as the relaxation time.
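
As a simple illustration, relaxation time can be measured by disturbing a toy system and timing how long it takes to return to within a tolerance of equilibrium. The decay rate and the 1% tolerance in the Python sketch below are arbitrary choices made for the example.

```python
# Measuring relaxation time for a simple exponentially relaxing system.
# The rate constant and the 1% tolerance are arbitrary, illustrative choices.

def relaxation_time(disturbance=10.0, rate=0.5, dt=0.01, tolerance=0.01):
    """Time for a disturbance decaying as dx/dt = -rate * x to fall within tolerance."""
    x, t = disturbance, 0.0
    while abs(x) > tolerance * disturbance:
        x += -rate * x * dt
        t += dt
    return t

print(f"Relaxation time: {relaxation_time():.2f} time units")
```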

Circular Causality or Feedback. Feedback occurs when the outputs of a system are routed back as inputs, either directly or via other systems. Thus, a chain of cause and effect is created in the form of a circuit or loop. The American psychologist Karl Weick explained the operation of systems in terms of positive and negative feedback loops. Systems can change autonomously between stable and unstable states depending on the dominant form of feedback. Feedback is, therefore, the basis of self-maintaining systems, which will be discussed in the next article.
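
As a final sketch, the Python example below routes a fraction of a system's output back as input on each step. With a negative gain the loop damps disturbances and the system is stable; with a positive gain it amplifies them and the system is unstable. The gains and starting value are invented for illustration.

```python
# Negative vs positive feedback loops: output routed back as input each step.
# Gains and the starting value are illustrative only.

def run_loop(gain, start=1.0, steps=10):
    """Each step, a fraction of the current output is fed back and added to the state."""
    x = start
    history = [x]
    for _ in range(steps):
        x = x + gain * x          # gain < 0: negative feedback; gain > 0: positive feedback
        history.append(round(x, 3))
    return history

print("negative feedback:", run_loop(gain=-0.5))   # disturbance dies away (stable)
print("positive feedback:", run_loop(gain=+0.5))   # disturbance grows (unstable)
```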