f. How to Gain Understanding



People understand the world through pattern recognition. Recurring patterns of events attract our attention; we remember them, attach meaning to them, and later use them as an aid to predicting the world. This trait has evolved to help our survival and the propagation of our genome. Non-recurring events are of lesser interest because they do not permit prediction, and we are therefore less likely to remember them or attach meaning to them.

Causality as a basis

Such recurring patterns of events have their basis in causality, and our perception of causality itself probably has a hereditary basis. Certainly, other animals seem to understand cause and effect, as evidenced by Pavlov’s famous behavioural experiments. Also, we have probably all experienced a young child repeating the question “why?”. This is probably the child exercising hereditary skills in the recognition of causality.


Noticing these patterns is highly tentative at first. We merely notice similarities between events and feel an intangible sense of order. We do not have the words to describe what we notice, and it is not integrated into our general worldview. However, as our brains absorb the new information and make the necessary connections, our understanding grows, and we can find words to communicate the insights. A general rule forms that we can use predictively. Unfortunately, this can be a slow process, often involving several nights of good sleep and some research into the topic. This is effectively the same as the creative process of saturation, incubation, inspiration, and verification described in an earlier article, but with saturation replaced by experience.

We can also seek the fundamental origins of the recurring patterns that we observe. For example, the very concept of causality was discovered in this way. Patterns were noticed, and causality was then recognised as a further pattern within them.


When we seek meaning we are essentially attempting to understand a pattern that describes the universe in its entirety. Unfortunately, however, pattern recognition is limited by our cognitive abilities. The principle of darkness applies, and our minds are simply not complex enough to model such a pattern. We can only recognise relatively simple ones such as causal relationships and feedback loops, and even those with difficulty. If there is any meaning to the universe, then it is certainly beyond our ability to perceive it. It would be more sensible to recognise this, rather than invent simplistic or mystical explanations. In practice, we must satisfy ourselves with understanding small parts of the world around us. For example, the purpose of this blog is to convey an understanding of human nature and society.


As explained above, to understand a recurring pattern, it must be integrated into our general worldview. Obviously, if our worldview is a mystical or religious one, then we may give those patterns an explanation of that type. On the other hand, we will give the patterns a scientific explanation if our worldview is of that nature.


The process of predicting events and acting proactively is known in systems science as feedforward. This term is also used in personnel management to describe the training of staff to meet future business needs. The term feedforward suggests that it is the opposite of feedback. However, this is only so in the sense that feedback reacts to past events, whilst feedforward anticipates future ones. Feedforward relies on a knowledge of causal patterns. It is, therefore, a feature of agents or of systems created by agents.
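The distinction can be illustrated with a short sketch. The scenario, parameter values, and function names below are my own, purely illustrative choices: a feedback controller reacts to a temperature error it has already measured, whilst a feedforward controller acts on a forecast, using a known causal pattern (heat loss proportional to the indoor/outdoor difference).

```python
# Illustrative sketch contrasting feedback and feedforward control
# of room heating. All names and values are invented for the example.

def feedback_step(temp, target, gain=0.5):
    """Feedback: react to an error that has already been observed."""
    error = target - temp
    return gain * error  # heating applied in response to a past deviation

def feedforward_step(forecast_outside_temp, target, loss_per_degree=0.1):
    """Feedforward: act on a prediction, using a known causal pattern
    (heat loss is proportional to the indoor/outdoor difference)."""
    predicted_loss = loss_per_degree * (target - forecast_outside_temp)
    return max(predicted_loss, 0.0)  # pre-emptive heating, never negative

temp, target = 18.0, 21.0
print(feedback_step(temp, target))        # corrects an existing error
print(feedforward_step(5.0, target))      # anticipates a future loss
```

Both controllers issue commands in the form of information, but only the feedforward one requires a causal model of its environment.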

How to Use this Process

We can reverse this recognition process. This involves designing a causal pattern and then looking for it in the world around us. Another approach is to generalise theories from specialised fields into general causal patterns. Once a pattern, for example the replication of information, has been created, we can then look for manifestations of it in the real world. In this way we may, for example, notice cellular division, the viral spread of misinformation on the internet, and so on. As explained in the previous article, there are many ways in which information can be altered during replication. So, two copies of the same information can contain contradictions. This in turn can lead to competition regarding which is correct, and, as will be described in a future article, to conflict. From this model it is possible to suggest reasons for real world events such as conflicts between closely related religious factions, etc.

In different fields and specialities, different words are often used for similar concepts. This tends to obscure similarities between the causal processes involved. However, once we have a pattern in mind, its recognition in the real world or in another field of expertise becomes much easier.

e. The Systems Approach to Communication


Communication is about the transfer of information. The latter is held in the way that matter or energy is organised. A key feature of information is that it can be replicated, whilst matter and energy cannot, i.e., organisation in one place can be copied to another. The term “replication” is used because the information becomes established in the new location whilst also being retained in the original.

An example is cellular reproduction. Information is held in a cell’s DNA and provides a template for the way in which the cell is formed and functions. DNA is an interlocking double helix. Each individual helix or strand contains the necessary information. Before a cell divides, its DNA is replicated, first by splitting into its two strands. The matching strand for each is then fabricated from chemicals in the DNA’s cellular environment. When the cell divides, each daughter cell carries a copy of the original DNA, and thus the information in the original cell is replicated. This is just one example. Similar processes exist throughout the living world and are essential for the propagation of information, including human knowledge and beliefs.
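As a purely illustrative sketch (the base sequences and function names are invented for the example), the strand-splitting process can be expressed in a few lines: each strand acts as a template from which its complement is rebuilt, yielding two copies of the original information.

```python
# Illustrative sketch of strand-splitting replication: each strand is a
# template from which its complementary partner is fabricated.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Fabricate the matching strand for a single template strand."""
    return "".join(COMPLEMENT[base] for base in strand)

def replicate(double_helix: tuple[str, str]) -> list[tuple[str, str]]:
    """Split the helix into its two strands and rebuild a partner for
    each, producing two helices carrying the original information."""
    strand_a, strand_b = double_helix
    return [(strand_a, complement(strand_a)),
            (complement(strand_b), strand_b)]

original = ("ATGC", "TACG")
print(replicate(original))  # two helices, each a copy of the original
```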

The Shannon-Weaver model of communication identifies five key components: the sender, the encoder, the channel, the decoder, and the receiver. Shannon explained miscommunication by introducing the concept of noise in the channel. However, this neglected other ways in which human communication can fail.

The principle of discrete minds denies the existence of telepathy, i.e., the ability of one mind to transfer information directly into another. Rather, each person must translate his knowledge into one of many languages, and transmit it via a medium of communication, for example a book, an email, or speech. The recipient must then acquire knowledge from that medium by translating from language into meaning and remembering the latter.

In the case of human communication, Shannon’s sender is the original source of the information, i.e., someone’s memory. The encoder is the same person translating his memory into an encoded form, e.g., speech, text, etc. The channel is a medium of communication, such as sound, a book, the internet, etc., which holds the encoded information, making it accessible to others. Sometimes information is held temporarily by the medium, as in the case of speech. Other times it is held more permanently, as in the case of a book. Shannon’s decoder is someone else who translates the codified information into information that is meaningful to him. Finally, the receiver is the ultimate destination of the information, i.e., the memory of the decoder.
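The five components can be sketched as a simple pipeline. The function names below are illustrative, not part of Shannon's formalism, and the "encoding" is a trivial stand-in for translating memory into a transmissible form.

```python
# Hedged sketch of the Shannon-Weaver pipeline: sender -> encoder ->
# channel -> decoder -> receiver. Names are illustrative only.

import random

def encode(meaning: str) -> str:
    """The sender's memory translated into an encoded form, e.g. text."""
    return meaning.upper()

def channel(signal: str, noise_rate: float = 0.0) -> str:
    """The medium of communication; noise may corrupt characters
    in transit with the given probability."""
    return "".join("?" if random.random() < noise_rate else ch
                   for ch in signal)

def decode(signal: str) -> str:
    """The recipient translating the encoded form back into meaning."""
    return signal.lower()

received = decode(channel(encode("hello"), noise_rate=0.0))
print(received)  # "hello" survives a noiseless channel intact
```

With a non-zero noise rate, some characters are lost in transit, which is where the redundancy discussed below becomes important.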

It can be seen from this process that there are opportunities for replication. A book can be duplicated several thousand times, and speech can be heard by many individuals.

However, human communication is not inevitable. If someone holds information, this does not necessarily imply that they communicate it. It is very common for information to be withheld, and there are numerous reasons for doing so. For example, it may confer an advantage on a competitor, it may be of little importance, or it may overload the processing capacity of the recipient.

The relevant information can, of course, be false at source. However, even if it is true, there are several ways for it to degrade and become false during the communication process.

  1. If someone communicates information, this does not necessarily imply that he believes it. He may be lying, or to put it more politely, providing misinformation.
  2. Errors can arise during encoding by, for example, a poor choice of words.
  3. As Shannon points out, there can be noise in the channel of communication. Noise is anything which can alter information during its transmission. If the medium is speech, then noise is literally any random sound, such as traffic, pneumatic drills, or the buzz of a crowd, which drowns it out. With the advent of more complex forms of communication, the term has become more general, however. The problem of noise interfering with communication can be minimised by information redundancy. In its simplest form this is repetition. It can also mean retransmission in an alternative form, or via another channel or medium, or some way for the recipient to check that the information has not degraded. Natural language itself contains much redundancy. Grammatical rules mean that it is still possible to decode a sentence even when words and letters are missing. For example, “I … happy that George l?kes t?e bisc??ts”.
  4. Our senses are fallible, and it is possible to misunderstand what is expressed in a medium of communication, e.g., by mishearing or misreading it.
  5. Our information processing abilities can also become overloaded. The principle of requisite parsimony means that there are limits to the rate at which we can decode information. However, the principle of requisite saliency says we can deal with this limitation by prioritising the information we do receive, and process only what seems to be the most important.
  6. In memorising information, the principle of effort after meaning plays an important part. When attempting to store new information in memory, we often modify it so that it is consistent with what we already know.
  7. Finally, memories fade if not constantly accessed, and even when they are accessed, this can result in them being modified.

Given all these factors, errors in human communication are inevitable. Indeed, it may seem surprising that we are able to communicate at all. Perhaps information redundancy is the reason.
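The simplest form of redundancy mentioned in point 3, repetition, can be sketched as follows (an illustrative toy model, not a real protocol): each message is transmitted three times, and the recipient recovers it by a majority vote at each position.

```python
# Sketch of redundancy by repetition: transmit each message three times
# and let the recipient recover it by per-position majority vote.

from collections import Counter

def transmit_with_redundancy(message: str, copies: int = 3) -> list[str]:
    return [message] * copies

def recover(received_copies: list[str]) -> str:
    """Majority vote per position; survives noise that corrupts only
    a minority of the copies at any one position."""
    return "".join(
        Counter(chars).most_common(1)[0][0]
        for chars in zip(*received_copies)
    )

# Noise has corrupted different positions in different copies:
noisy = ["he?lo", "hell?", "?ello"]
print(recover(noisy))  # the original message is still recoverable
```

This is the same principle that lets us decode “I … happy that George l?kes t?e bisc??ts”: the redundancy of natural language plays the role of the extra copies.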

d. Principles of Self-Maintaining Systems


Some systems, known as self-maintaining systems, are thought to have both maintenance sub-systems and adaptive mechanisms. The maintenance sub-system sustains the relationship between the other sub-systems and holds the entire system together. The adaptive mechanisms promote changes to inputs, outputs, and processes, to keep the system in equilibrium with its environment. Living things, for example, are self-maintaining, but they are not the only such systems. People also create self-maintaining machines, computer programmes, etc.

Self-maintenance and adaptation are carried out through a process of feedback. Information on inputs, processes, and outputs is passed to the controlling sub-system. The latter then processes this information, and issues commands, again in the form of information, to sub-systems engaged in accepting inputs, in processing them, and in delivering outputs. For control to be successful, important aspects of the subordinate sub-systems must appear as a white box to the controlling sub-system, i.e., must be known by it. This existence of controlling and subordinate systems is known as requisite hierarchy.

Another principle, requisite variety, applies to the operation of controlling sub-systems. This principle was discovered by W. Ross Ashby and is also known as the First Law of Cybernetics. It holds that the degree of control of a system is proportional to the amount of information available. Variety refers to the number of states of a system. If a controlling sub-system can recognise all possible states, then it has full knowledge of the system’s behaviour and can therefore issue appropriate instructions. If it does not have knowledge of all possible states, uncertainty arises. Ashby believed that “When the variety or complexity of the environment exceeds the capacity of a system (natural or artificial) the environment will dominate and ultimately destroy that system.”
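Ashby's principle can be caricatured in code (a deliberate simplification, with states and responses reduced to labels of my own invention): a controller keeps the system stable only when its repertoire of responses matches the variety of disturbances it encounters.

```python
# Caricature of requisite variety: a disturbance is neutralised only if
# the controller's repertoire contains a matching response.

def regulate(disturbance: str, repertoire: dict[str, str]) -> str:
    """Return the outcome: stable if a response exists, else unstable."""
    if disturbance in repertoire:
        return f"stable ({repertoire[disturbance]})"
    return "unstable (unrecognised state)"

full_repertoire = {"too_hot": "cool", "too_cold": "heat"}
limited_repertoire = {"too_hot": "cool"}

print(regulate("too_cold", full_repertoire))     # variety matches: stable
print(regulate("too_cold", limited_repertoire))  # variety falls short
```

When the environment presents more states than the repertoire covers, the environment dominates, exactly as Ashby's remark suggests.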

Such systems are known as self-maintaining systems because they perform these operations autonomously and without any assistance from their environment. However, they can use a large part of their inputs in self-maintenance as opposed to producing outputs. The boundaries of systems which are not self-maintaining are defined by the observer. However, self-maintaining systems define their own boundaries. In a living system, such as a bacterial cell or a multicellular organism, this property is known as autopoiesis.

Systems with higher levels of organisation can display purposive behaviour or agency. That is, they have choices available to them, and produce an end result after a period of time. Systems with purposive behaviour can “extract” inputs from other systems in their environment, or can “exchange” them for their own outputs. Without adaptation, a system can become unsustainable. It may, for example, extract inputs at a rate greater than its environment can produce them, or it may produce outputs at a rate greater than its environment can process them.

The internal organisation of a system can increase in complexity without being guided or managed by an outside source. This is known as self-organisation and relies on four main ingredients: positive feedback; negative feedback; a balance between the exploitation of existing opportunities and the exploration of new ones; and multiple interactions. The latter are not merely one-way causal inputs, but also two-way output/input relationships with other systems in the environment.

The reliability of a system can be increased through redundancy. That is, the duplication of critical components. One important redundancy is known as redundancy of potential command. This principle was first identified by the American neurophysiologist, Warren McCulloch, in the 1950s. When studying the transmission of signals between the brain and the nervous system, it was found that two identical signals from the same source were being delivered by a primary channel and an auxiliary channel. From this, McCulloch developed the principle that knowledge, i.e., correct information, constitutes authority. This will be explored further in my next post.

To these principles, I would add the two variational principles described in my earlier article on decision making and behaviour. That is, pressing needs and the efficient use of resources. A self-maintaining system may have several functions but limited resources. So, it is necessary to prioritise its processes and attend to the most pressing needs first. It must also employ its resources as efficiently as possible to maximise the benefits of its processes and outputs. Together, these variational principles help to maintain the system and contribute to the likelihood of its continued existence.

c. Further Principles of General Systems Theory


I will describe General Systems Theory in more detail in the next few articles, and then provide a systems-based model which can be used to understand human society, how it works, and why it sometimes fails. This model uses the principles described below.

Near Decomposability. Many natural and artificial systems are structured hierarchically, and their components can be seen as occupying levels. At the highest level is the system in its entirety. Its components occupy lower levels. As we move down through the levels we encounter ever more, smaller, and less complex components. The rates of interaction between components at one level tend to be quicker than those at the level above. The most obvious example of this is the speed with which people make decisions. An individual can make decisions relatively quickly, but the rate steadily slows as we move up the hierarchy through small groups, organisations, and nations, to global society.

Sub-optimisation. This principle recognises that a focus on optimising the performance of one component of a system can lead to greater inefficiency in the system as a whole. Rather the whole system must be optimised if it is to perform at maximum efficiency. Its components must sometimes operate sub-optimally.

Darkness. This principle states that no system can be known completely. The best representation of a complex system is the system itself. Any other representation will contain errors. Thus, the components of a system only react to the inputs they receive, and cannot “know” the behaviour of the system as a whole. For the latter to be possible, the complexity of the whole system would need to be present in the component. The expression “black box” is used to describe a system or component whose internal processes are unknown, and “white box” to describe one whose internal processes are known. Most systems are, of course, “grey boxes”.

An interesting question arises from the principles of near decomposability and darkness. As explained in previous articles, human beings are motivated by needs and contra-needs. The question is, of course, whether groups of individuals, species, and ecosystems also have needs and contra-needs which differ from those of their individual members. Are reduced birth rates, for example, a natural species response to population pressures? If so, then near decomposability implies that, because groups, species, and ecosystems are more complex systems than single individuals, the processes which satisfy those needs will proceed more slowly. Darkness implies that as individuals we would be unable to “know” the processes involved, although as a society we might.

Equifinality. The processes in a system can, but do not necessarily, have an equilibrium point, i.e., a point at which entropy is at a minimum, and at which the system normally operates. If, for any reason, the processes are displaced from it, then they will subsequently alter to approach that point once more. This characteristic is known as homeostasis. Thus, a given end state can be reached from many initial states, a feature known as equifinality. For example, if a child’s swing is displaced from the vertical and released, then, after swinging to and fro for a while, it will eventually return to the vertical.
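The swing example can be simulated (a crude numerical sketch, with arbitrary stiffness and damping values of my own choosing): released from many different initial displacements, a damped oscillator settles at the same equilibrium point, which is equifinality in miniature.

```python
# Sketch of equifinality: a damped swing released from different initial
# displacements settles at the same equilibrium (angle 0).

def settle(angle: float, velocity: float = 0.0,
           stiffness: float = 0.5, damping: float = 0.3,
           dt: float = 0.1, steps: int = 2000) -> float:
    """Crude numerical integration of a damped oscillator; returns the
    final angle after the system has had time to relax."""
    for _ in range(steps):
        accel = -stiffness * angle - damping * velocity
        velocity += accel * dt
        angle += velocity * dt
    return angle

for start in (0.5, -1.0, 2.0):      # many initial states...
    print(round(settle(start), 4))  # ...one end state: the vertical
```

The number of steps needed to reach equilibrium corresponds to the relaxation time discussed below.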

Multifinality. It is possible for the processes in a system to have more than one stable point. If a process is displaced a little from one of them, it may ultimately return. However, if it is displaced too far, then it may subsequently approach another equilibrium point. This is a feature of natural ecosystems. If they are damaged in some way, they will ultimately return to a stable state. However, this state will often differ from the original, pre-damage one.

Dynamic Equilibrium. This principle is like that of equifinality but applies to rates of change in systems. Some systems are dynamic and have a stable rate of change. If displaced from that rate of change for any reason, they will ultimately return to it. This is known as homeorhesis, a term derived from the Greek for “similar flow”. Again, a dynamic system may have several stable rates of change.

Relaxation Time. Relaxation means the return of a disturbed system to equilibrium. The time it takes to do so is known as the relaxation time.

Circular Causality or Feedback. Feedback occurs when the outputs of a system are routed back as inputs, either directly or via other systems. Thus, a chain of cause and effect is created in the form of a circuit or loop. The American psychologist Karl Weick explained the operation of systems in terms of positive and negative feedback loops. Systems can change autonomously between stable and unstable states depending on the dominant form of feedback. Feedback is, therefore, the basis of self-maintaining systems which will be discussed in the next article.
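The two forms of feedback can be sketched numerically (an illustrative toy model, with an invented gain parameter): a negative gain damps deviations towards a stable point, whilst a positive gain amplifies them, so the dominant form of feedback determines whether the system is stable or unstable.

```python
# Toy model of Weick's distinction: each cycle routes a fraction of the
# output back as input. Negative feedback damps, positive amplifies.

def iterate(value: float, gain: float, steps: int = 20) -> float:
    """Feed a fraction of the current value back into itself each cycle."""
    for _ in range(steps):
        value += gain * value  # gain < 0: negative feedback; > 0: positive
    return value

print(iterate(10.0, gain=-0.5))  # shrinks towards the stable point at 0
print(iterate(10.0, gain=0.5))   # grows without bound
```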


The Mathematics of Language and Thought

A copy of my recently published book, “The Mathematics of Language and Thought”, is now available for free download on this website. Click here or go to “Menu Options”, “My Books” and click on the links.

The topic covered is mathematical logic. The book describes a new and innovative system which is axiom based, can be manipulated in a similar way to algebra, and which unites the various conventional logics, mathematics, and natural language using a single form of symbolism. Furthermore, it improves significantly on conventional tense and epistemic logics. It also replicates causality and natural human reasoning, which is, of course, probabilistic.

I have had the advantage of access to a word processor. Nevertheless, this work has taken 23 years to complete. I have the greatest admiration therefore for earlier innovators, such as Cantor, Frege and Russell, who had nothing but a fountain pen or a quill and inkpot to work with. Clearly, it was impossible for them to investigate the subject as deeply as they must have wished. What might they have achieved with present day technology?

b. General Systems Theory


In this article, I will describe a branch of science known as General Systems Theory. I will do so because it provides an extremely powerful set of tools for understanding human nature and society.

The aim of General Systems Theory is to provide an overarching theory of organisation which can be applied to any field of study. It aims to identify broadly applicable concepts rather than those which apply only to one field. It can, therefore, apply in the fields of mathematics, engineering, chemistry, biology, the social sciences, ecology, etc. One of the principal founders of General Systems Theory was the Austrian biologist Ludwig von Bertalanffy (1901 – 1972), although there have been many other contributors. To date, its principal application has been in the popular fields of business, the environment, and psychology, but it is equally applicable to human nature and society.

A system comprises a collection of inter-related components, with a clearly defined boundary, which work together to achieve common objectives. Within this boundary lies the system, and outside lies its environment. Systems are described as being either open or closed. In the case of a closed system, nothing can enter it from, or leave it to, the environment. It is a hypothetical concept, therefore. In reality, all systems are open systems comprising inputs, processes and outputs to the environment. In a closed system, the 2nd Law of Thermodynamics applies, entropy will steadily increase, and the system will fall into disorder. However, in an open system, it is possible to resist decay, or even to reverse it and increase order.

In summary, an open system comprises inputs, processes, and outputs. In the case of an individual human being, our inputs are satisfiers and contra-satisfiers, our processes comprise our needs, contra-needs and decision-making, and our outputs are our behaviour.

The basis of General Systems Theory is causality. Everything we regard as being a cause or effect comprises components, which can also be regarded as causes and effects. Ultimately, causality has its foundation in particle physics, therefore. Furthermore, every cause or effect is a component of yet greater causes and effects, up to the scale of the universe in its entirety. Similarly, General Systems Theory regards everything from the smallest particle to the entire universe as a system. Thus, every system comprises components which are also systems, and every system is a component of yet greater systems. A system, a cause, and an effect are all one and the same thing, therefore.

In causality, events of one type cause events of another type by passing matter, energy or information to them. These are the equivalent of the inputs and outputs of a system. As Einstein explained, matter is organised energy. Information is also conveyed in the way that matter or energy is organised. So, causality is the transfer of energy, in an organised or disorganised form, from one system to another. This transfer can be regarded as an output from the cause, and an input to the effect. Causes and effects form chains or loops, and so create recurring, and thus, recognisable patterns of energy flow. It is such recognisable patterns that enable us to understand and predict the world in which we live, and which are of interest to General Systems Theory.

Causes can, of course, be necessary or sufficient. For a system or system component to carry out its function, several inputs from the environment or other components may be necessary. Only together may they be sufficient for the system to function. Furthermore, inhibitors also have a part to play in preventing effects on processes. Thus, the relationships between a system and its environment, and the relationships between the components of a system can be complex and chaotic.

A feature of systems is that they often display emergent properties. These are characteristics that the component parts of a system do not have, but which, by virtue of these parts acting together, the system does have. In other words, “the whole is more than the sum of its parts”. This concept dates to at least the time of Aristotle. The classic example is consciousness. A human being experiences consciousness, but his or her component cells do not. Similarly, systems also display vanishing properties. These are properties that a system does not have, but which its component parts do. For example, individual human beings may be compassionate but an organisation comprising such people may not. Emergent and vanishing properties are thought to be related to the way that energy is organised and flows in a system. They are recognisable patterns of energy flow.

Continuum changes of state occur when a variable characteristic of something alters. For example, when a child puts on weight or grows in height. System complexity is one such variable characteristic. Changes in a variable characteristic can be imperceptible in the short term but aggregate over time until we can perceive them. For example, in the longer term, a person can change his or her state from that of being a child to that of being an adult, but the changes which occur in a week are imperceptible. Emergent and vanishing properties are thought to be continuum changes of state which occur as the complexity of systems grow. They can be identified by comparing things that are similar, but either more or less complex than one another, e.g., a chimpanzee and a human being.

We tend to think of systems as falling into categories which are organised hierarchically, e.g., the popular categories: animal, vegetable, and mineral. The best way of categorising the levels in a hierarchy of systems is via emergent properties. This is because with new properties, new rules also emerge. One emergent property of particular importance is self-maintenance. This appears in life, beginning with replicative molecules and moving up through viruses, bacteria, and multi-cellular organisms, to ourselves. This self-maintenance property is the same as life’s struggle to maintain its integrity in the face of entropy.

Self-maintaining systems are characterised by two types of feedback loop. One is internal and the other external. The internal feedback loop is known in systems theory as the command feedback loop. It gathers information from within the system and modifies its operation. The external feedback loops are particularly relevant to human society. They comprise the system interacting with its environment, through its outputs, to create circumstances conducive to the supply of its necessary inputs. The goal of both is, of course, to ensure the continued survival of the system in changing circumstances.

Individual human beings, organisations, and societies can be regarded as systems. So too can the natural environment in which we live, for example, the weather and natural ecosystems. However, their behaviour can be chaotic rather than deterministic. We can predict them to a limited extent, but the probability of any prediction proving correct diminishes as distance into the future increases.

a. Causality in More Detail


We take it for granted that the universe operates according to the laws of causality. People may disagree on what causes a particular effect, but there is no disagreement on the existence of causality. This is universally accepted. But what is causality? In this and the next few articles I will attempt to explain.

We are well used to thinking in terms of causality, which we understand to mean a cause leading to an effect. However, this apparently simple concept contains much complexity. Firstly, we do not always use the word “cause” when describing causality. For example, rather than saying that a factory causes cars, we say that a factory manufactures them.

Secondly, we normally regard an effect as being the beginning of an event, object, or circumstance. However, it can also be the end, a change of state, or the ongoing event, object, or circumstance in its entirety. Thus, we refer to one event (the cause) as causing another (the effect) to begin, end, alter in state, or be ongoing in its entirety.

Thirdly, although the names cause and effect are singular, both are, in fact, plural collections of events, objects or circumstances of a particular type. Any single member is known as an instance of the cause or effect.

Causality describes the ways in which instances of these two collections can match. The Scottish philosopher David Hume observed that for a causal relationship to exist:

  1. an instance of the effect must always begin after an instance of the cause; and
  2. the instances of the effect and cause must be contiguous in space.

In other words, for a causal relationship to exist, the region of space-time occupied by an instance of the cause must contain the region of space-time occupied by an instance of the effect. The region of space-time occupied by something is the space occupied by it at every point in time during its existence.
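As a deliberate simplification (collapsing four-dimensional space-time regions to one-dimensional time intervals, with invented example values), the containment test can be sketched as:

```python
# Simplified sketch of the containment condition above, with space-time
# regions collapsed to time intervals. Real regions are four-dimensional.

from typing import NamedTuple

class Interval(NamedTuple):
    start: float
    end: float

def contains(outer: Interval, inner: Interval) -> bool:
    """Does the outer region contain the inner one?"""
    return outer.start <= inner.start and inner.end <= outer.end

cause = Interval(0.0, 10.0)
effect = Interval(2.0, 8.0)
print(contains(cause, effect))  # True: consistent with a causal relation
print(contains(effect, cause))  # False: the effect cannot contain the cause
```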

Causal rules are derived from the way in which individual pairings of the instances are repeated. Two sets of events are described as being causally related if one of the following conditions applies.

  1. If an instance of the cause is sufficient for an instance of the effect, then the region of space-time occupied by the former always contains the region of space-time occupied by the latter. Fig.1 shows this diagrammatically. In other words, an instance of the effect always takes place in the presence of an instance of the cause. However, it is not necessarily the case that every instance of the effect results from an instance of the cause.
  2. If an instance of the cause is necessary for an instance of the effect, the region of space-time occupied by the latter is always contained by the region of space-time occupied by the former. Fig.2 shows this diagrammatically. In other words, an instance of the effect cannot take place in the absence of an instance of the cause. However, it is not necessarily the case that every instance of the cause leads to an instance of the effect.
Fig.1 A space-time diagram showing instances of a sufficient cause as white ellipses, and instances of the effect as black lines at the beginning of events shown by grey ellipses.
Fig. 2 A space-time diagram showing instances of a necessary cause as white ellipses, and instances of the effect as black lines at the beginning of events shown by grey ellipses.

If an event of a particular type occurs, then these causal rules allow us to deduce, with varying degrees of certainty, what causes have taken place or what effects will take place.
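These deductions can be sketched in code. The snippet below is a purely illustrative toy, not something from the original text: it encodes a causal rule as a (cause, effect, kind) triple and shows which inferences follow with certainty from a single observation.

```python
# A minimal sketch (purely illustrative) of deduction from causal rules.
# "sufficient": whenever an instance of the cause occurs, an instance of
# the effect occurs. "necessary": the effect never occurs without the cause.

def deduce(observed, rule):
    """Return what can be inferred with certainty from one observation,
    given a (cause, effect, kind) rule."""
    cause, effect, kind = rule
    if kind == "sufficient" and observed == cause:
        return f"{effect} will occur"          # the cause guarantees the effect
    if kind == "necessary" and observed == effect:
        return f"{cause} must have occurred"   # the effect presupposes the cause
    return "no certain deduction"              # only probabilistic inference remains

rule = ("spark", "explosion", "sufficient")
print(deduce("spark", rule))      # the spark guarantees an explosion
print(deduce("explosion", rule))  # sufficiency alone gives no certain cause
```

Note how the asymmetry in the code mirrors the asymmetry in the prose: a sufficient cause licenses prediction of the effect, while a necessary cause licenses retrodiction of the cause; neither licenses both.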

Causality can be complex, with several causes combining to produce an effect. The epidemiologist Ken Rothman explained that, for an effect to take place, it is often the case that several necessary causes must combine to create a sufficient cause. The combination of necessary causes of types A, B and C may be sufficient to result in an effect of type D. For example, the presence of gas, oxygen and a spark are each necessary and together sufficient to cause a gas explosion. Fig.3 shows this diagrammatically.

Fig.3 A space-time diagram showing instances of three necessary causes as coloured ellipses, which together comprise a sufficient cause, and instances of the effect as black lines at the beginning of events shown by grey ellipses.
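Rothman's idea can be sketched as a simple check that every component cause is present. The component names follow the article's gas-explosion example; the function itself and its set-based encoding are illustrative assumptions, not Rothman's own notation.

```python
# Sketch of Rothman's sufficient-component-cause model: several
# necessary causes together make up one sufficient cause.

def effect_occurs(present, components=("gas", "oxygen", "spark")):
    """The effect occurs only when every necessary component of the
    sufficient cause is present at the same place and time."""
    return all(c in present for c in components)

print(effect_occurs({"gas", "oxygen", "spark"}))  # True: the sufficient cause is complete
print(effect_occurs({"gas", "oxygen"}))           # False: a necessary cause is missing
```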

One aspect of causality which is often overlooked is the existence of inhibitors. In the same way as a cause and an effect, an inhibitor is a plural collection of physical events, objects, or circumstances of a particular type. However, it is the opposite of a cause in that it prevents an effect from taking place. Depending on its type, the presence of an instance of the inhibitor can prevent an event from beginning, ending, changing state, or occurring in its entirety, irrespective of any causes which might dictate otherwise.

In the same way as causes, inhibitors can be necessary to prevent an event or sufficient to do so. If an inhibitor is necessary but not present, then the effect can occur. However, this does not necessarily mean that it will occur. This depends on what causes are present. On the other hand, if an inhibitor is sufficient and present, then the effect cannot occur. In practice, a sufficient inhibitor can be a combination of several necessary inhibitors. The region of space-time in which the effect is prevented is the overlap between them.
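The earlier component-cause sketch can be extended to show how a sufficient inhibitor overrides the causes. The inhibitor name "fire_suppressant" is a made-up illustration chosen to fit the gas-explosion example.

```python
# Extending the component-cause sketch with an inhibitor: a present
# sufficient inhibitor blocks the effect irrespective of the causes.

def effect_occurs(present,
                  causes=("gas", "oxygen", "spark"),
                  inhibitors=("fire_suppressant",)):
    # A present sufficient inhibitor prevents the effect outright.
    if any(i in present for i in inhibitors):
        return False
    # Otherwise the effect occurs when the sufficient cause is complete.
    return all(c in present for c in causes)

print(effect_occurs({"gas", "oxygen", "spark"}))                      # True
print(effect_occurs({"gas", "oxygen", "spark", "fire_suppressant"}))  # False: inhibited
```

The inhibitor check comes first in the function because, as the text notes, an inhibitor acts irrespective of any causes which might dictate otherwise.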

Causality is, of course, a physical process. This process will be described in more detail in the next article.

o. Regret

Regret


Before moving on from decision making, I would like to say something about regret. We all experience regret over decisions we have made or failed to make. “I wish I had done this”, “I shouldn’t have done that”, “If only I had done something else instead…” and so on. This is especially the case when an opportunity seems to have been missed or a risk was not avoided.

We should admit to mistakes because this enables us to correct them or mitigate their impact. However, there are several reasons for not feeling the emotion of regret.

  1. The most obvious one is, of course, that what is done cannot be undone. The past cannot be changed. We can only act in the present and the future to mitigate the effect of any seemingly poor decisions.
  2. Decisions often have multiple outcomes, some of which are positive and others negative. In a chaotic world, these outcomes can rarely be predicted. So, although an alternative decision may have yielded the benefit we desire, it may also have yielded unanticipated disbenefits. Furthermore, the latter might outweigh the former.
  3. Most people have an optimism bias. This leads us to believe that we are more likely to be successful and less likely to suffer misfortune than reality would suggest. So, when we miss an opportunity or suffer a risk, we tend to believe, often incorrectly, that an alternative decision would have avoided this.
  4. In reality, the future is probabilistic. After an initial decision, if we wish to achieve the desired outcome, we must often make ongoing adjustments in the face of the unexpected. In practice, we often manage our way to desired outcomes over a period of time.
  5. Focusing on what might have been uses mental resources. There are benefits to be had in learning from “mistakes”. However, there is also a danger that, if we focus on them too much, we will suffer depression, neglect future decisions, or begin to lack the confidence to make them.

I recommend the novel, “The Midnight Library” by Matt Haig, which illustrates this beautifully.

In conclusion, life should be lived as it is, and not as it might have been. However, we must remain at the steering wheel and make constant adjustments if we want it to take the direction we would wish.

“When one door closes another door opens; but we so often look so long and so regretfully upon the closed door, that we do not see the ones which open for us.”

Alexander Graham Bell

n. Maintaining Independence of Mind

Maintaining Independence of Mind

To maintain our independence of mind it is necessary to avoid unconscious beliefs and attitudes that we would prefer not to have. Suggestions as to how we might do so are listed below.

  • Question the motives of charismatic leaders and role models.
  • Avoid following authoritarian leaders or being managed by authoritarian managers. They will insist that we adopt their point of view if we wish to remain in the group that they lead. Inclusive leaders and managers, on the other hand, respect and value independence of mind.
  • Avoid following populist leaders. They will often place the blame for any difficult circumstances we find ourselves in on an “outgroup” rather than address the true reasons.
  • Avoid ideologies. If we need to join a group to socialize, then we should join one whose members have a wide range of views rather than a particular ideology. One rough check is whether adding “ism” to a word in the group’s name yields the name of an ideology.
  • Practice awareness of our own emotions and those of others with whom we interact. Emotional contagion and emotional carry-over from previous decisions can both affect our current decisions. Furthermore, our emotions can be deliberately manipulated by others to achieve their desired ends.
  • Strengthen our conscious skills by practicing highly focused mental and, possibly, physical activities, e.g., a personal project or Sudoku puzzles.
  • Develop a clear personal ethic and set of values. It may need to evolve over time as circumstances alter it, but that is normal.
  • Consciously rehearse our ethics and values to strengthen them. A strongly held ethic makes it more difficult for contradictory unconscious beliefs and attitudes to gain a foothold.
  • Acquaint ourselves with the verifiable facts around an issue before making decisions associated with it.
  • Consciously criticise our decisions, especially apparently spontaneous ones. Judge them against our personal ethic and values. If necessary, veto them and think again.
  • Avoid watching unsolicited advertising. For example, watch advertisement-free channels or mute the TV when advertisements are on. Cover the advertisements on the backs of seats on buses and aircraft. If we need something, we can search for it on the internet or consult a shopkeeper.
  • It is particularly important to avoid watching the same advert repetitively. In the UK it is illegal for an advert to repeat the same message more than three times, as this subliminally reinforces it. So how do advertisers get around this? By frequently repeating the whole advert.
  • Lobby government for greater controls over advertising. It should be factual, unintrusive, not personally targeted, not excessively repetitive, and not attribute false benefits to the product.
m. Perspectivism and Poly-perspectivism

Perspectivism and Poly-perspectivism

No-one has the mental capacity to fully understand the world. Each of us is only capable of a partial understanding. This concept is known as perspectivism. It is possible, however, to expand and improve our worldview through interaction with those of others. This is known as poly-perspectivism. To give an analogy, when we look at a statue, we see only one side or perspective. Two people at diametrically opposite positions see entirely different perspectives. However, each is a part of the truth. Walking around the statue enables us to see all perspectives and, thus, the whole truth. Individually, we lack the mental capacity to do this for the whole of reality, of course, but it can be done for relatively limited topics.

Poly-perspectivism means understanding other perspectives. It does not mean abandoning our own, but rather building on it and correcting it where necessary. Unfortunately, each worldview is partially true and partially false. The proportion varies from individual to individual, and from worldview to worldview. Thus, other perspectives will almost certainly include beliefs which are objectively false. Furthermore, beliefs can deliberately be falsified in the interest of their proponents. This means that the techniques for identifying truth, described in my previous article, must be used when considering other perspectives.

Advice on how to engage with other perspectives is given in Paul Graham’s hierarchy of disagreement. As a rule, the lower a person’s behaviour sits on that hierarchy, the more defensive they are of their worldview.

One major advantage of poly-perspectivism is associated with “holism”. This term was coined by the South African statesman Jan Smuts in 1926, and means that the whole is more than the sum of its parts. Holism is another way of describing emergent properties, i.e., properties which are not held by the individual parts of a system, but only by the system acting together as a whole. Our personal perspective may enable us to see part of what emerges from the whole, but it is unlikely that we will see all of it, or understand how and why it emerges. However, the more we adopt truths from other perspectives, the more we can:

  1. see the relevant topic as a whole;
  2. see errors in our own perspective of it;
  3. see fully what emerges from it; and
  4. understand how and why those things emerge.