The “Extended Framework for a General Systems Theory” (EFGST) builds upon the original “Framework for a General Systems Theory” that I first released several months ago. The original framework provided a structured, cross-disciplinary approach to understanding systems, their properties, and the causal processes that drive them.
This new Extended Framework expands this foundation with several key advances that connect systems theory more tightly to physical and informational dynamics:
Seeds and Contra-Seeds describe how systems can be triggered to develop or decay through reinforcing or opposing influences.
Mobus’s Concept of Systemness, integrated with the notion of state spaces, helps describe how systems maintain coherence, evolve, and approach attractors.
Troncale’s Linkage Propositions are reinterpreted within EFGST as causal-probabilistic connections that can potentially be used to map probabilities across configuration and state spaces, providing a degree of predictability.
Recomposition provides a new explanatory model for how complex systems build upon rather than replace their components, clarifying how emergence arises at distinct levels.
Together, these extensions bring the framework closer to a unified theory of system dynamics, applicable from physics and biology to social and cognitive systems.
You can read the overview paper and explore the detailed set of definitions and propositions here:
These materials form the foundation for ongoing work towards an integrated General Systems Theory: one that connects the causal, energetic, and informational dimensions of system behaviour.
Systems theory, causality, natural language, and logic have traditionally been pursued as separate disciplines. However, underlying each of these domains are fundamental structures that suggest a deeper, unified framework. The way we structure our understanding of these disciplines is not arbitrary. Rather, it is dictated by principles that govern perception and cognition. It may also be dictated by principles that govern reality.
The Unified Universal Disciplines Hypothesis (UUDH) proposed in this paper posits that systems theory, causality, natural language, and logic are different manifestations of the same underlying structure in the way that human beings perceive reality and reason. Each of these domains encodes and processes causal interactions in ways that reflect the level of complexity and perspective employed by the observer.
This paper presents the argument and describes the methodology for unifying these disciplines into a cohesive model that enables more precise reasoning across them. Symbolic Reasoning, an enhancement of traditional set theory, provides a formal tool to facilitate this unification.
UUDH has considerable and diverse explanatory power, from quantum theory to human society. The unification of systems, causality, natural language, and logic represents a promising approach to developing a more comprehensive understanding of human cognition and external reality. By integrating these traditionally separate fields, we can enhance our ability to reason about complex systems in a coherent and structured manner. Symbolic Reasoning offers a powerful tool for this integration. However, the approach is hypothetical, and empirical testing is needed to verify it.
The paper presents a comprehensive hypothesis that seeks to explain the nature of reality and how humans understand it, integrating foundational concepts from critical realism, systems theory, and causality. The hypothesis holds that reality can be viewed as a fractal-like structure, generated by underlying organising principles that operate at various ranks in a hierarchy. Starting from acausal foundational principles, the paper explores how systems interact, transfer matter, energy, and information, and contribute to the complexity observed at different levels of organisation. The hypothesis extends to the idea that human understanding is structured by organising principles that differ from reality’s, leading to distinct layers of comprehension reflected in scientific disciplines. The paper suggests that integrating these principles may help bridge gaps between disciplines, such as the disconnect between social sciences and the biological sciences. This unification has the potential to deepen our understanding of both the natural world and human social behaviour, while identifying new pathways for societal change.
It was originally published in January 2023 and has been updated to include observations from:
“A Conceptual Framework for General System Theory”, John A. Challoner, March 2024.
“Different Interpretations of Systems Terms”, sent to the Research towards a General Systems Theory SIG of the International Society for the Systems Sciences in April 2024.
“The Mathematics of Language and Thought”, John A. Challoner, 2021.
The paper discusses systems theory from a cognitive and physicalist perspective. The cognitive perspective holds that we are our minds and cannot escape the constraints imposed by their biology and evolutionary history. Nevertheless, human cognition is a reasonably accurate representation of reality. Physicalism holds that space-time comprises the whole of reality and that everything, including abstract concepts and information, exists within it.
From this perspective, conceptual and theoretical frameworks for systems theory are proposed and described. Concepts include: the importance of structure; the nature of relationships, causality, and physical laws; and the significance of recursion, hierarchy, holism, and emergence. Human cognitive factors are also discussed, including: their limitations; the nature of information and language; and the search for knowledge in a world of complexity and apparent disorder.
The paper also discusses the implications of this perspective for General System Theory and Social Systems Theory, suggesting further work to advance those disciplines.
I will describe General Systems Theory in more detail in the next few articles, and then provide a systems-based model which can be used to understand human society, how it works, and why it sometimes fails. This model uses the principles described below.
Near Decomposability. Many natural and artificial systems are structured hierarchically, and their components can be seen as occupying levels. At the highest level is the system in its entirety. Its components occupy lower levels. As we move down through the levels we encounter ever more numerous, smaller, and less complex components. The rates of interaction between components at one level tend to be quicker than those at the level above. The most obvious example of this is the speed with which people make decisions. An individual can make decisions relatively quickly, but the rate steadily slows as we move up the hierarchy through small groups, organisations, and nations, to global society.
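As a simple illustration of this principle (a toy sketch of my own, not part of the formal theory), a near-decomposable system can be represented as a coupling matrix in which interactions within a group are strong and interactions between groups are weak. The strong internal couplings settle quickly; the weak cross-couplings drive the slower, higher-level behaviour.

```python
import numpy as np

# A hypothetical near-decomposable system: two groups of three components.
# Interactions within a group are strong; interactions between groups are weak.
strong, weak = 1.0, 0.01
K = np.full((6, 6), weak)      # weak coupling everywhere...
K[:3, :3] = strong             # ...except within group 1
K[3:, 3:] = strong             # ...and within group 2
np.fill_diagonal(K, 0.0)

# For simple "drift towards your neighbours" dynamics, the relaxation rates
# are the eigenvalues of the coupling's graph Laplacian.
L = np.diag(K.sum(axis=1)) - K
rates = np.sort(np.linalg.eigvalsh(L))
print(rates)
# The single small non-zero rate comes from the weak between-group links
# (slow, system-level equilibration); the large rates come from the strong
# within-group links (fast, component-level equilibration).
```

The clear separation between the fast and slow rates is what makes it reasonable to study each level of a hierarchy more or less independently.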
Sub-optimisation. This principle recognises that a focus on optimising the performance of one component of a system can lead to greater inefficiency in the system as a whole. Rather, the whole system must be optimised if it is to perform at maximum efficiency, and to achieve this its components must sometimes operate sub-optimally.
Darkness. This principle states that no system can be known completely. The best representation of a complex system is the system itself. Any other representation will contain errors. Thus, the components of a system only react to the inputs they receive, and cannot “know” the behaviour of the system as a whole. For the latter to be possible, the complexity of the whole system would need to be present in the component. The expression “black box” is used to describe a system or component whose internal processes are unknown, and “white box” to describe one whose internal processes are known. Most systems are, of course, “grey boxes”.
An interesting question arises from the principles of near decomposability and darkness. As explained in previous articles, human beings are motivated by needs and contra-needs. The question is, of course, whether groups of individuals, species, and ecosystems also have needs and contra-needs which differ from those of their individual members. Are reduced birth rates, for example, a natural species response to population pressures? If so, then near decomposability implies that, because groups, species, and ecosystems are more complex systems than single individuals, the processes which satisfy those needs will proceed more slowly. Darkness implies that as individuals we would be unable to “know” the processes involved, although as a society we might.
Equifinality. The processes in a system can, but do not necessarily, have an equilibrium point, i.e., a point at which the system normally operates. If, for any reason, the processes are displaced from it, then they will subsequently alter to approach that point once more. This characteristic is known as homeostasis. Thus, a given end state can be reached from many initial states, a feature known as equifinality. For example, if a child’s swing is displaced from the vertical and released, then, after swinging to and fro for a while, it will eventually return to the vertical.
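The swing example can be made concrete with a toy simulation (my own sketch, purely illustrative): a damped pendulum released from several different angles always settles back to the vertical, i.e., many initial states lead to the same end state.

```python
import math

def settle(theta0, damping=0.5, g_over_l=9.81, dt=0.01, steps=5000):
    """Simulate a damped pendulum and return its final angle in radians."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        alpha = -g_over_l * math.sin(theta) - damping * omega  # gravity + friction
        omega += alpha * dt
        theta += omega * dt
    return theta

# Different initial displacements, the same end state: the vertical (theta = 0).
for theta0 in (0.2, 0.8, 1.4):
    print(f"start {theta0:.1f} rad -> settles near {settle(theta0):.4f} rad")
```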
Multifinality. It is possible for the processes in a system to have more than one stable point. If a process is displaced a little from one of them, it may ultimately return. However, if it is displaced too far, then it may subsequently approach another equilibrium point. This is a feature of natural ecosystems. If they are damaged in some way, they will ultimately return to a stable state. However, this state will often differ from the earlier, damaged, original.
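Again, a toy model (my own, purely illustrative) makes the point: the dynamics dx/dt = x − x³ have two stable points, at −1 and +1, separated by an unstable point at 0. A displacement that stays on one side of 0 returns to the nearby stable point; a displacement that crosses 0 settles at the other one.

```python
def evolve(x0, dt=0.01, steps=2000):
    """Toy bistable system dx/dt = x - x**3, with stable points at -1 and +1."""
    x = x0
    for _ in range(steps):
        x += (x - x**3) * dt
    return x

# Displacements that stay on the positive side of the unstable point at 0
# return to +1; a displacement that crosses 0 settles at -1 instead.
for x0 in (1.2, 0.5, -0.3):
    print(f"start {x0:+.1f} -> settles near {evolve(x0):+.3f}")
```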
Dynamic Equilibrium. This principle is like that of equifinality but applies to rates of change in systems. Some systems are dynamic and have a stable rate of change. If displaced from that rate of change for any reason, they will ultimately return to it. This is known as homeorhesis, a term derived from the Greek for “similar flow”. Again, a dynamic system may have several stable rates of change.
Relaxation Time. Relaxation means the return of a disturbed system to equilibrium. The time it takes to do so is known as the relaxation time.
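For the simplest case, an exponentially relaxing system, the disturbance decays by a fixed proportion per unit time, and the relaxation time is the time constant of that decay. A minimal sketch of my own:

```python
import math

def disturbance(t, x0=1.0, tau=2.0):
    """Exponential relaxation: an initial disturbance x0 decays with time constant tau."""
    return x0 * math.exp(-t / tau)

# After one relaxation time (t = tau) about 37% of the disturbance remains;
# after five relaxation times the system is effectively back at equilibrium.
for t in (0.0, 2.0, 10.0):
    print(f"t = {t:4.1f}  remaining disturbance = {disturbance(t):.3f}")
```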
Circular Causality or Feedback. Feedback occurs when the outputs of a system are routed back as inputs, either directly or via other systems. Thus, a chain of cause and effect is created in the form of a circuit or loop. The American psychologist Karl Weick explained the operation of systems in terms of positive and negative feedback loops. Systems can change autonomously between stable and unstable states depending on the dominant form of feedback. Feedback is, therefore, the basis of self-maintaining systems which will be discussed in the next article.
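The difference between the two forms of feedback can be shown with a toy calculation (my own sketch, not drawn from Weick): negative feedback subtracts a fraction of the deviation from a set point on each cycle, so deviations shrink; positive feedback adds it back, so deviations grow.

```python
def step(x, setpoint=0.0, gain=0.2, positive=False):
    """One feedback cycle: a fraction of the deviation is fed back as a correction."""
    error = x - setpoint
    return x + gain * error if positive else x - gain * error

x_neg = x_pos = 1.0   # both start displaced from the set point
for _ in range(10):
    x_neg = step(x_neg)                 # negative feedback: deviation shrinks
    x_pos = step(x_pos, positive=True)  # positive feedback: deviation grows
print(f"after 10 cycles: negative feedback -> {x_neg:.3f}, positive -> {x_pos:.3f}")
```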
In this article, I will describe a branch of science known as General Systems Theory. I will do so because it provides an extremely powerful set of tools for understanding human nature and society.
The aim of General Systems Theory is to provide an overarching theory of organisation which can be applied to any field of study. It aims to identify broadly applicable concepts rather than those which apply only to one field. It can, therefore, apply in the fields of mathematics, engineering, chemistry, biology, the social sciences, ecology, etc. One of the principal founders of General Systems Theory was the Austrian biologist Ludwig von Bertalanffy (1901 – 1972), although there have been many other contributors. To date, its principal application has been in the popular fields of business, the environment, and psychology, but it is equally applicable to human nature and society.
A system comprises a collection of inter-related components, with a clearly defined boundary, which work together to achieve common objectives. Within this boundary lies the system, and outside lies its environment. Systems are described as being either open or closed. In the case of a closed system, nothing can enter it from, or leave it to, the environment. It is a hypothetical concept, therefore. In reality, all systems are open systems comprising inputs, processes, and outputs to and from the environment. In a closed system, the 2nd Law of Thermodynamics dictates that entropy will steadily increase and the system will fall into disorder. However, in an open system, it is possible to resist decay, or even to reverse it and increase order.
In summary, an open system comprises inputs, processes, and outputs. In the case of an individual human being, our inputs are satisfiers and contra-satisfiers, our processes comprise our needs, contra-needs and decision-making, and our outputs are our behaviour.
The basis of General Systems Theory is causality. Everything we regard as being a cause or effect comprises components, which can also be regarded as causes and effects. Ultimately, causality has its foundation in particle physics, therefore. Furthermore, every cause or effect is a component of yet greater causes and effects, up to the scale of the universe in its entirety. Similarly, General Systems Theory regards everything from the smallest particle to the entire universe as a system. Thus, every system comprises components which are also systems, and every system is a component of yet greater systems. A system, a cause, and an effect are all one and the same thing, therefore.
In causality, events of one type cause events of another type by passing matter, energy, or information to them. These are the equivalent of the inputs and outputs of a system. As Einstein explained, matter is organised energy. Information is also conveyed in the way that matter or energy is organised. So, causality is the transfer of energy, in an organised or disorganised form, from one system to another. This transfer can be regarded as an output from the cause, and an input to the effect. Causes and effects form chains or loops, and so create recurring, and thus recognisable, patterns of energy flow. It is such recognisable patterns that enable us to understand and predict the world in which we live, and which are of interest to General Systems Theory.
Causes can, of course, be necessary or sufficient. For a system or system component to carry out its function, several inputs from the environment or other components may be necessary. Only together may they be sufficient for the system to function. Furthermore, inhibitors also have a part to play in preventing effects on processes. Thus, the relationships between a system and its environment, and the relationships between the components of a system can be complex and chaotic.
A feature of systems is that they often display emergent properties. These are characteristics that the component parts of a system do not have, but which, by virtue of these parts acting together, the system does have. In other words, “the whole is more than the sum of its parts”. This concept dates to at least the time of Aristotle. The classic example is consciousness. A human being experiences consciousness, but his or her component cells do not. Similarly, systems also display vanishing properties. These are properties that a system does not have, but which its component parts do. For example, individual human beings may be compassionate but an organisation comprising such people may not. Emergent and vanishing properties are thought to be related to the way that energy is organised and flows in a system. They are recognisable patterns of energy flow.
Continuum changes of state occur when a variable characteristic of something alters, for example, when a child puts on weight or grows in height. System complexity is one such variable characteristic. Changes in a variable characteristic can be imperceptible in the short term but aggregate over time until we can perceive them. For example, in the longer term, a person can change his or her state from that of being a child to that of being an adult, but the changes which occur in a week are imperceptible. Emergent and vanishing properties are thought to be continuum changes of state which occur as the complexity of systems grows. They can be identified by comparing things that are similar, but either more or less complex than one another, e.g., a chimpanzee and a human being.
We tend to think of systems as falling into categories which are organised hierarchically, e.g., the popular categories: animal, vegetable, and mineral. The best way of categorising the levels in a hierarchy of systems is via emergent properties. This is because with new properties, new rules also emerge. One emergent property of particular importance is self-maintenance. This appears in life, beginning with replicative molecules and moving up through viruses, bacteria, and multi-cellular organisms, to ourselves. This self-maintenance property is the same as life’s struggle to maintain its integrity in the face of entropy.
Self-maintaining systems are characterised by two types of feedback loop. One is internal and the other external. The internal feedback loop is known in systems theory as the command feedback loop. It gathers information from within the system and modifies its operation. The external feedback loops are particularly relevant to human society. They comprise the system interacting with its environment, through its outputs, to create circumstances conducive to the supply of its necessary inputs. The goal of both is, of course, to ensure the continued survival of the system in changing circumstances.
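A crude sketch of these two loops (my own toy model, not drawn from the systems literature): the system holds an internal store of some necessary input; the internal command loop throttles the system's own activity when the store runs low, while the external loop acts on the environment to replenish it.

```python
def run(steps=50):
    """Toy self-maintaining system with an internal and an external feedback loop."""
    store, target = 10.0, 10.0
    for _ in range(steps):
        consumption = 1.0
        # Internal "command" loop: monitor the internal store and throttle
        # the system's own activity when reserves run low.
        if store < target:
            consumption *= 0.5
        # External loop: act on the environment to replenish inputs, working
        # harder the further the store falls below the target.
        gathered = 0.8 + max(0.0, 0.2 * (target - store))
        store += gathered - consumption
    return store

print(f"store after 50 cycles: {run():.2f}")  # hovers close to the target of 10
```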
Individual human beings, organisations, and societies can be regarded as systems. So too can the natural environment in which we live, for example, the weather and natural ecosystems. However, their behaviour can be chaotic rather than deterministic. We can predict them to a limited extent, but the probability of any prediction proving correct diminishes as distance into the future increases.