One of the most intriguing features of living and social systems is that tiny actions often have enormous effects. A neuron fires and a limb moves. A policy is announced and an institution reorganises itself. A symbolic message spreads and a social movement begins.
Why does this happen?
A growing line of work, including a recent paper I’ve written, suggests that agency operates through a mechanism we might call causal leverage. In simple terms:
information, even when coupled with very little energy, can unlock or redirect far larger flows of energy elsewhere.
This idea bridges physics, biology, cognition, and social behaviour. It explains why:
control systems use small signals to regulate large processes,
communication changes minds with minimal physical effort,
leaders and institutions wield influence through words more than force,
and why humans naturally seek positions of “power”: such positions amplify the effects of their actions.
Rather than treating agency and social power as abstract concepts, this approach roots them in the physical world. The full paper explores these ideas in more detail for those who are interested and can be downloaded as a PDF at https://rational-understanding.com/SST
We often speak of “information” as though it floats freely in cyberspace or the human mind, detached from anything physical. Yet every bit of information, from the letters on this page to the thoughts in your head, is carried by matter or energy. This simple observation lies at the heart of cognitive physicalism, the view that cognition, communication, and social coordination are all thermodynamic processes.
Information Is Order
In physical terms, information is negative entropy: order among the components of a system. When the atoms of a crystal, the base pairs of DNA, or the neurons of a brain are arranged in regular patterns, they hold information by reducing randomness. This definition, first clarified by Erwin Schrödinger and Léon Brillouin, gives information the same physical dimensions as entropy: joules per kelvin.
Energy provides the capacity for work (measured in joules); information provides the form that directs that work. Together they make organisation possible.
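In symbols (standard formulations due to Boltzmann and Schrödinger, stated here for reference rather than taken from the paper):

```latex
S = k_B \ln W, \qquad N = S_{\max} - S \;\ge\; 0
```

Here W counts the microstates compatible with a configuration, k_B ≈ 1.38 × 10⁻²³ J K⁻¹ is the Boltzmann constant, and the negentropy N, the measure of order and hence of information, carries the same dimensions as S: joules per kelvin.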
How Physics Becomes Mind
In purely physical systems, energy and entropy simply flow. With life, informational structures emerge that regulate those flows. A cell maintains order by channelling chemical energy through genetic and enzymatic constraints. With evolution, feedback control grows more elaborate: nervous systems model the world, predict outcomes, and choose among options. Agency, the ability to act purposefully, appears when informational form controls energetic process.
At higher levels, the same principle produces cognition, language, and society. Neural firing, conversation, and economic exchange are all manifestations of energy flows organised by information.
Why Equations Matter
When information theory borrowed from thermodynamics, it kept Boltzmann’s equation but quietly normalised away the Boltzmann constant, k_B. Doing so made information appear dimensionless: handy for communication engineers, but misleading for science. As Rolf Landauer later reminded us, information is physical: erasing a single bit requires energy and generates heat. Ignoring this fact masks the cost of learning, computing, and communicating; costs that become crucial when we extend systems thinking to living and social domains.
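The normalisation in question, made explicit (textbook relations, included for reference):

```latex
H = -\sum_i p_i \log_2 p_i \ \text{(bits)}, \qquad
S = (k_B \ln 2)\, H, \qquad
E_{\text{erase}} \;\ge\; k_B T \ln 2 \ \text{per bit}
```

Dropping the factor k_B ln 2 turns a thermodynamic quantity into a dimensionless count of bits; Landauer’s bound restores the physical price.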
The Structure of Agency
Agency can be described in three physical layers:
Agentic information structure: a pattern that directs energy; dimensions of entropy (joules per kelvin).
Agentic potential: an information-structured energy capacity; dimensions of energy (joules).
Actualised agency: a directed energy flow through time; dimensions of power (joules per second).
Energy provides the means, information the form, and their coupling the act. Whether in a cell, a mind, or a society, the same dimensional hierarchy holds.
The Sun and the Spectrum of Agency
All terrestrial agency begins with the Sun. The energy of photons striking chlorophyll is converted into chemical potential, which sustains metabolism, cognition, and eventually culture. Every thought, conversation, or social reform is therefore a distant echo of solar radiation: a transformation of sunlight into structured work.
The Cost of Thought and Change
Learning, decision, and communication are thermodynamic operations. Brain imaging shows energy consumption rising during problem-solving; each new memory reduces neural entropy while producing waste heat. The same principle scales up: cultural and institutional change require energy to reorganise shared information. Schools, media, and political movements are energetic engines for lowering societal entropy. When their energy supply falters, coherence and collective agency decline.
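As a back-of-envelope sketch of these costs (my own illustrative arithmetic, not a figure from the imaging literature), the Landauer floor can be evaluated at body temperature:

```python
# Minimal sketch: the thermodynamic floor on information processing.
# The ~20 W brain power budget is a common rough figure and is an
# assumption here, not a measurement from this article.
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
T_body = 310.0                # approximate body temperature, K

e_per_bit = k_B * T_body * math.log(2)   # Landauer minimum, J per bit erased
print(f"Landauer minimum at 310 K: {e_per_bit:.2e} J/bit")   # ~3.0e-21 J

brain_watts = 20.0            # assumed resting brain power budget
print(f"Bits/s at the Landauer limit: {brain_watts / e_per_bit:.2e}")
# The answer (~6.7e21 bits/s) is astronomically larger than any estimate
# of actual neural throughput: real brains pay far more than the minimum,
# mostly for signalling and structural maintenance.
```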
Why This Matters for Systems Science
Re-embedding information and agency in physics brings fresh clarity to systems thinking. It explains why order must be sustained by flows, why “effort” feels costly, and why every form of coordination, from metabolism to governance, depends on continual energy input. It also offers a bridge between natural and social sciences: the same thermodynamic grammar governs both.
As Ilya Prigogine showed, local order can grow even while global entropy rises. Life, mind, and society are all such dissipative structures, islands of organisation maintained by throughputs of energy and information. Understanding this continuity reminds us that progress itself carries an energetic price.
From Theory to Application
Recognising the physical nature of information could reshape how we approach education, technology, and governance. Policies and systems that ignore their energetic base risk collapse; those that respect it can harness energy more efficiently to sustain informational order.
Energy is the means, information the form, and agency the dance between them. Seen thermodynamically, every act of understanding is a small victory over entropy; a local flowering of order in the great energetic flow from the Sun.
The “Extended Framework for a General Systems Theory” (EFGST) builds upon the original “Framework for a General Systems Theory” that I first released several months ago. The original framework provided a structured, cross-disciplinary approach to understanding systems, their properties, and the causal processes that drive them.
This new Extended Framework expands this foundation with several key advances that connect systems theory more tightly to physical and informational dynamics:
Seeds and Contra-Seeds describe how systems can be triggered to develop or decay through reinforcing or opposing influences.
Mobus’s Concept of Systemness, integrated with the notion of state spaces, helps describe how systems maintain coherence, evolve, and approach attractors.
Troncale’s Linkage Propositions are reinterpreted within EFGST as causal-probabilistic connections that can potentially be used to map probabilities across configuration and state spaces, providing a degree of predictability.
Recomposition provides a new explanatory model for how complex systems build upon rather than replace their components, clarifying how emergence arises in distinct levels.
Together, these extensions bring the framework closer to a unified theory of system dynamics, applicable from physics and biology to social and cognitive systems.
You can read the overview paper and explore the detailed set of definitions and propositions here:
These materials form the foundation for ongoing work towards an integrated General Systems Theory; one that connects the causal, energetic, and informational dimensions of system behaviour.
This article explores two fundamental modes of causal reasoning: TPT (Transfer-Process-Transfer) and PTP (Process-Transfer-Process) structures. These structures help clarify how humans and artificial intelligences like large language models reason about cause and effect, why both are susceptible to error, and why combining them is essential for a robust understanding.
The two forms of reasoning derive from the following:
Causal transfers take time, and travelling through any causal network in the direction of the arrow of time yields a chain of alternating processes and transfers, i.e. … P – T – P – T – P …
Causes are effects, and effects are causes.
Every system or event in a causal chain shares a component with its predecessor and successor.
The PTP structure equates to an event in which something does something to something else. The TPT structure equates to a system with its inputs, processes and outputs.
TPT Reasoning: Pattern Recognition and Unconscious Inference
TPT causality refers to a structure in which two processes are linked by an inferred or unknown transfer, i.e. each cause and effect has the structure TPT and the two are linked by a common T. In human cognition, this reflects pattern recognition: we notice that two processes frequently co-occur, and infer a causal link, even if we cannot identify what mediates the connection.
This form of reasoning is fast, intuitive, and largely unconscious. It allows us to make rapid inferences from experience, often without awareness of the intermediate mechanisms. However, it is error-prone. TPT reasoning is vulnerable to spurious associations and errors caused by unseen common causes. In these cases, the inferred causal link is false, despite the pattern appearing consistent.
Large language models also rely heavily on TPT-type reasoning. They identify recurring associations in their training data and reproduce those patterns in response. This allows them to answer questions, complete prompts, and simulate explanations even when they do not possess internal models of the causal transfers involved.
PTP Reasoning: Explicit Inference and Conscious Verification
In PTP causality, by contrast, causes and effects consist of a process, a known transfer, and another process. Each cause or effect has a PTP structure and the two are linked by a common P. This represents structured reasoning in which a clearly identified mechanism links cause and effect. In human cognition, this kind of reasoning is associated with conscious, reflective thinking. It is slow, deliberate, and effortful, but less prone to error.
Verification through PTP reasoning is essential when pattern-based inferences (TPT) are in doubt. It allows us to examine whether a supposed cause-effect relationship is supported by identifiable transfers. In systems theory terms, it confirms that the output of one process is indeed the input to another.
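A minimal sketch of the contrast in code (the representation and all names here are my own illustration, not part of the theory’s formal apparatus):

```python
# Toy encoding of TPT vs PTP reasoning: pattern-based inference from
# co-occurrence versus verification via an identified transfer.
from dataclasses import dataclass

@dataclass(frozen=True)
class Transfer:          # an identified carrier of matter/energy/information
    source: str          # the process whose output it is
    sink: str            # the process whose input it is

def infer_tpt(episodes, a, b, threshold=0.8):
    """TPT-style inference: posit a causal link between processes a and b
    purely because they co-occur often. Fast, but blind to the mediating
    transfer, and so vulnerable to spurious association."""
    co = sum(1 for ep in episodes if a in ep and b in ep)
    base = sum(1 for ep in episodes if a in ep)
    return base > 0 and co / base >= threshold

def verify_ptp(known_transfers, a, b):
    """PTP-style verification: accept the link only if an identified
    transfer carries a's output into b, i.e. the output of one process
    is demonstrably the input of the other."""
    return any(t.source == a and t.sink == b for t in known_transfers)

episodes = [{"rooster_crows", "sunrise"}] * 9 + [{"rooster_crows"}]
transfers = [Transfer("earth_rotation", "sunrise")]

print(infer_tpt(episodes, "rooster_crows", "sunrise"))    # True: pattern found
print(verify_ptp(transfers, "rooster_crows", "sunrise"))  # False: no transfer
```

The rooster example is the post hoc fallacy in miniature: TPT inference accepts the association, while PTP verification rejects it because no transfer runs from the crowing to the sunrise.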
Error and Verification in Human and AI Cognition
Both humans and artificial intelligences are vulnerable to error when relying solely on TPT reasoning. A classic example is the post hoc fallacy: assuming that because B follows A, A caused B. Without identifying the actual transfer, such reasoning remains speculative.
AI systems, too, may generate plausible but incorrect answers when their training data contains coincidental patterns. They may infer connections that resemble PTP structures but are not grounded in causality.
This is why PTP reasoning is vital for verification. It distinguishes genuine causal chains from coincidental associations by demanding an explicit causal transfer.
A Unified Framework of Reasoning
A key insight from systems theory is that these two modes of reasoning are not exclusive. In fact, they are complementary. TPT reasoning allows for quick hypothesis generation and intuitive understanding. PTP reasoning provides a structure for verification, deeper analysis, and error correction.
Understanding and integrating both types of causal reasoning is central to building a theory of cognition, both biological and artificial. It also has direct implications for epistemology, systems modelling, and the future of AI development.
Conclusion
TPT and PTP causality offer a powerful lens for interpreting human and artificial thought. TPT supports rapid pattern recognition; PTP ensures that those patterns are grounded in real causal mechanisms. Awareness of this dual structure is essential for improving reasoning, communication, and the development of intelligent systems.
Future work may involve identifying when to trust each mode, and how to better integrate them in education, epistemology, and machine reasoning architectures.
Every system, from molecules to minds to markets, changes over time. These changes are not random. Systems tend to follow patterns: settling into stability, reacting to shocks, and sometimes undergoing deep transformations. One of the most powerful ways to understand this behaviour is through the medium of energy landscapes, a concept that is well established and widely used in physics.
Systems undergo phase transitions, a term borrowed from physics. When water freezes to ice, it experiences a type 1 phase transition: the change occurs almost instantaneously across the entire system. More complex systems, however, typically undergo a type 2 phase transition: one that requires them to traverse an energy landscape, moving step by step between stable states. Over geological time, for example, the Earth has shifted through such a landscape from a predominantly mineral state to a living one and may now be transitioning toward an informational one.
An energy landscape is a conceptual tool that maps all the possible configurations a system can take and shows how stable each of those configurations is. It is not a feature of the system itself, which at any given time exists in just one of those configurations. Instead, it is a representation of the system’s entire configuration space, i.e., the set of all possible arrangements of its components, whether or not those arrangements actually exist. While it is helpful to imagine this landscape in two dimensions, in practice it may have hundreds, thousands, or even millions of dimensions.
A system can be closed (no energy or matter enters or leaves it), open to energy only, or open to both energy and matter. The nature of the landscape differs for each. This is explained in more detail in the paper “Framework for a General System Theory” (Challoner, 2025), available at https://rational-understanding.com/2025/05/12/framework-for-a-general-system-theory/.
In the open systems encountered in nature, valleys in the energy landscape represent stable, low-energy states (also called attractors) where systems tend to settle. Hills or peaks are unstable, high-energy states where systems rarely remain for long. Over time, systems “move” across this landscape in response to internal dynamics and external influences.
In this context, internal dynamics refers to changes that arise from within the system itself, without major external shocks. In physical systems, this might be thermal fluctuations or ongoing chemical reactions; in biological systems, metabolic processes or genetic variation; in social systems, demographic shifts, gradual changes in norms and institutions, or structured cycles of change such as Margaret Archer’s Morphogenetic Cycle. Over time, these small, cumulative adjustments can alter the system’s configuration, nudging it toward a new position in its energy landscape.
If left undisturbed, however, most systems drift toward the lowest nearby valley: the most stable state available.
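A minimal numerical sketch (my own toy model, with an arbitrary double-well potential standing in for a real landscape) shows both behaviours: settling into the nearest valley, and noise-driven escape into a deeper one:

```python
# Toy landscape: E(x) = x**4 - 2*x**2 - x/4, a double well whose
# right-hand valley is slightly deeper. Noisy gradient descent plays
# the role of internal dynamics plus external perturbation.
import math
import random

def grad_E(x):
    return 4 * x**3 - 4 * x - 0.25      # derivative of E(x)

def settle(x, noise, steps=20000, dt=1e-3, seed=1):
    rng = random.Random(seed)
    for _ in range(steps):
        x += -grad_E(x) * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

print(round(settle(-1.0, noise=0.1), 2))  # low noise: stays in the left valley
print(round(settle(-1.0, noise=1.2), 2))  # high noise: can hop the central hill
# (which valley it ends in varies with the seed and the noise level)
```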
The Structure of Systems and Their Landscapes
To understand what defines a system’s configuration space, we need to know what the system’s components are. Systems theory describes reality as a nested hierarchy; each system is made of subsystems, which are themselves made of smaller subsystems, and so on. Assembly theory offers a compatible view from another angle; it sees every system as built from previously assembled components that themselves have been assembled from simpler, previously assembled parts.
Assembly theory assigns levels of assembly. The simplest structures occupy level 1. Assemblies made from level 1 components occupy level 2, and so on, increasing in complexity. Thus, any system can be described as level n, and composed of level n–1 components. The latter are, in turn, made of level n–2 sub-components, and so on.
The configuration space of a system of level n is defined by the degrees of freedom of its level n–1 components, that is, the independent ways in which they can vary.
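Stated compactly (a formalisation sketch, with symbols of my own choosing): if a level-n system has m level-(n−1) components and component i can vary in f_i independent ways, then

```latex
\dim C_n \;=\; \sum_{i=1}^{m} f_i
```

so the configuration space C_n grows with both the number of components and the freedom each one has to vary.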
Open System Energy Landscapes
An open system energy landscape maps the total energy of a system onto the configuration space of its components. In the simplified three-dimensional visualisation, valleys (low total energy) correspond to stable attractors. They are typically associated with high organisation and high “information at source”. Peaks, on the other hand, are unstable configurations, typically associated with high total energy, low organisation, and low “information at source”.
Figure 1 – An energy landscape visualised as hills and valleys in a two-dimensional terrain.
In this framework, “information at source” is equivalent to Schrödinger’s negentropy, i.e., the degree to which a system’s entropy is less than its maximum possible value. Thus, in an open system energy landscape, valleys correspond to high-negentropy states, while peaks correspond to high-entropy states.
Static and Dynamic Landscapes
In open systems that are closed to mass but open to energy, the landscape is relatively static. As energy enters or leaves such a system, its energy landscape moves up or down whilst retaining the same overall profile. An example that approximates to such a system is the Earth as a whole, which receives energy from the Sun but gains little matter.
However, not all energy landscapes are equally stable. Systems open to both energy and mass have landscapes that are dynamic, shifting like the surface of a storm-driven ocean. In such systems, attractors can deepen, vanish, or be replaced as new matter and energy flow in or out.
Natural systems such as coastal estuaries, and social systems such as globalised manufacturing, both illustrate how being open to energy and mass makes a landscape dynamic. In an estuary, tides, storms, and seasonal floods bring new sediment, nutrients, and species, reshaping which ecological communities dominate. In manufacturing, new technologies, raw materials, and workforce movements can build new industrial hubs or undermine existing ones. In both cases, stable configurations (ecological communities or production networks) are attractors, but these can deepen, vanish, or be replaced entirely as continual flows of matter and energy reshape the landscape.
How Systems Traverse a Landscape
Over their lifecycles, open systems tend to shift into progressively deeper valleys, i.e., more complex and stable forms of organisation, until they are constrained by internal limits such as resource shortages or diverted by external shocks. Initially, a collection of components is only a subcritical structure: it lacks the emergent properties needed to deliver functions and outputs beyond those of its parts. As organisation increases, it may become a sub-optimal system, i.e., one that has an emergent function but not yet enough structure to deliver outputs efficiently. Further organisation can lead to an optimal state, where the energy used for structural maintenance and the energy used for output are balanced to maximise performance. Beyond this point, the system becomes super-optimal: any additional complexity may draw too much energy into self-maintenance, reducing output and eventually leading to collapse if maintenance demands outstrip available energy.
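One way to formalise the optimum (a sketch under assumptions of my own: a fixed energy throughput E, a maintenance cost M(c) and a directing effectiveness η(c) that both rise with complexity c): useful output is

```latex
U(c) = \eta(c)\,\bigl(E - M(c)\bigr), \qquad
U'(c^{*}) = 0 \;\Rightarrow\; \eta'(c^{*})\,\bigl(E - M(c^{*})\bigr) = \eta(c^{*})\,M'(c^{*})
```

Subcritical structures sit at low c where η ≈ 0; the optimal state is c*; super-optimal systems lie beyond it, where rising M(c) erodes U; and collapse looms as M(c) approaches E.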
Systems can also oscillate around an attractor, making continual small adjustments to remain stable. In real-world settings, such oscillations often produce repeating cycles, e.g., periods of growth followed by contraction, tension followed by resolution, or stability punctuated by brief disruptions. Over time, these cycles can reinforce the system’s current organisation, allowing it to return to the same attractor after each disturbance, a tendency known in systems theory as equifinality. However, if the oscillations amplify or are combined with large external shocks, the system may break from its cycle and transition into a different valley entirely, reorganising around a new attractor, a process referred to as multifinality. In social and ecological systems, such transitions may take the form of reorganisations, revolutions, or collapses.
Fractality in Energy Landscapes
Energy landscapes are often fractal. That is, similar patterns appear at different locations and scales. This arises because many configurations are variations of others. For example, components may be identical, allowing them to be interchanged without altering the whole, so different areas of the landscape share the same pattern. In addition, systems frequently assemble recursively, meaning that smaller subsystems are built in the same way as the larger system they belong to. This repetition of assembly patterns across levels produces repeating structures in the landscape itself: the routes to forming a subsystem resemble the routes to forming the whole, creating self-similar pathways and clusters of attractors at multiple scales.
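A small generative sketch (my own illustration, not a construction from the article): recursive midpoint displacement builds terrain the same way at every scale, which is exactly the self-similar property described above:

```python
# Recursive midpoint displacement: each refinement level adds bumps of
# half the previous amplitude, so every stretch of the terrain is a
# scaled-down statistical copy of the whole: a minimal fractal landscape.
import random

def fractal_landscape(depth, roughness=1.0, seed=42):
    rng = random.Random(seed)
    heights = [0.0, 0.0]                      # flat two-point terrain
    for level in range(depth):
        amp = roughness * 0.5 ** level        # finer scales, smaller bumps
        refined = []
        for a, b in zip(heights, heights[1:]):
            refined += [a, (a + b) / 2 + rng.uniform(-amp, amp)]
        refined.append(heights[-1])
        heights = refined
    return heights

terrain = fractal_landscape(depth=10)         # 2**10 + 1 = 1025 points
print(len(terrain), round(min(terrain), 2), round(max(terrain), 2))
```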
This fractal nature means that, as a system traverses its energy landscape, patterns of change it has followed before may reappear later in its life, and often at different scales. Because similar configurations and pathways exist in multiple locations across the landscape, the system can encounter familiar transitions in new contexts. This is why history can sometimes guide our expectations, although the self-similarity of the landscape never guarantees identical outcomes. For example, in ecology, the process by which vegetation colonises bare ground after a small landslide can resemble the much larger-scale succession that occurs after a volcanic eruption. The sequence of pioneer species, intermediate communities, and mature forest repeats the same general pattern, even though the scale, timing, and specific species differ. Similarly, in economics, a localised boom-and-bust cycle in a single industry can follow the same trajectory as a national economic cycle, but on a smaller scale and over a shorter period.
This fractal nature also means that systems can become trapped in “valleys within hilltops”. That is, zones of local stability nested inside larger instabilities. In such cases, a system may appear stable in the short term while, in reality, the broader configuration it occupies is unstable and heading toward change or collapse. For example, a government may maintain political stability through a fragile coalition, yet the entire national system faces deepening economic and environmental crises that will eventually destabilise it. Similarly, a supercooled liquid can remain in a seemingly stable state until the slightest disturbance triggers a complete and irreversible phase change.
From Physics to Society
The concept of energy landscapes is not limited to physics or chemistry. Social systems, such as international relations, also move across landscapes defined by stability and change. These systems are open to new energy in the form of ideas, movements, and crises, but largely closed to new matter, since nations rarely appear or disappear. Like physical systems, they experience periods of self-maintenance, oscillation, disruption, and transformation. And just as in natural systems, their landscapes can be reshaped by sustained flows of energy or sudden shocks.
Conclusion
Energy landscapes offer a way to see not just where a system is, but how it might change. They explain why systems settle into certain patterns, why some shocks cause sudden transitions while others do not, and why some paths are easier to follow than others. They also show how patterns can repeat, recombine, and evolve over time. By viewing systems through this lens, and by recognising that landscapes themselves can shift, we gain a powerful method for thinking about change in everything from molecules to markets.
I was pleased to present my paper “Exploring Poly-Perspectivism: Using Multiple Perspectives for a More Comprehensive Understanding of Reality” at the ISSS 2025 Conference.
The work explores how we can engage with diverse perspectives more productively without collapsing them into a single truth or drifting into relativism. It introduces a new meta-framework that evaluates perspectives by the human needs they satisfy or the harms they help prevent, offering a human-centred complement to systems science.
If you’re interested in interdisciplinary collaboration, epistemic coordination, or the cognitive dynamics behind complex decision-making, this work may be of interest. You can download the following:
The full paper
A glossary of key terms
A list of key propositions
Guidance on overcoming personal blind spots
A summary of Motivated Symbolic Interpretation Theory
Across a century of psychology, communication theory, and leadership research, the same insight keeps re-emerging: human cognition is triadic. Freud called it the Id, Ego, and Superego. Eric Berne described Child, Parent, and Adult ego states. More recently, systems thinkers speak of Ego, Eco, and Intuitive Intelligence.
Each of these frameworks highlights a different aspect of a common truth: the human mind is a layered system shaped by evolution, motivation, and reflexivity. We are driven by instinct, shaped by society, and guided by reflection. Understanding how these layers work can help us communicate better, make peace with ourselves, and grow as individuals and communities.
In my new article, I explore this recurring cognitive triad and its evolutionary foundations. I show how it maps onto brain structures, motivational needs (via Alderfer’s ERG theory), and modes of interpersonal communication. The article also shows how reflexivity and observation give us the tools to navigate these inner voices constructively.
Why do some words open doors while others close them? Why do some images attract and others repel? Why are some ideas welcomed and others dismissed, not because of their merit but because of how they’re framed?
Over the past few months, I’ve been developing a theory that helps explain exactly that. It’s called Motivated Symbolic Interpretation Theory (MSIT). It explores how certain words, phrases, images, and symbols may, in the past, have become associated with satisfying or frustrating experiences, and how these associations shape our responses to new information, often before we’re even aware of it.
The theory is easily understood, and is outlined in a concise summary document that introduces its core definitions and propositions. It’s a practical, cross-disciplinary idea with applications in communication, education, psychology, therapy, and personal relationships.
This is just the beginning. I’m working on a fuller explanation, with examples and practical tools to help people use the theory to improve clarity, trust, and understanding in everyday life.
This paper, freely downloadable at https://rational-understanding.com/UUDH#framework, presents a comprehensive framework for understanding systems across all domains of complexity: physical, biological, cognitive, and social. The framework builds upon, unifies, and extends classical systems science by grounding systemic behaviour in open system thermodynamics, energy landscapes, systems causality, and recursive emergence. At its core lies the concept of information at source: a measure of internal, recursively structured order, and its dynamic relationship with energy and entropy.
Systems are defined by the emergence of properties absent from their components, and their operation depends on the balance between energy available for maintaining internal structure and that required for exercising function. The framework explains how systems form, persist, collapse, or evolve by stabilising in attractor basins within energy landscapes, scaling recursively through fractal architecture.
Sets of formal definitions and propositions, whose provenance is given, underpin the theory, offering a structured, logically coherent, and cross-disciplinary model. The framework unifies foundational work by von Bertalanffy, Ashby, Beer, Bateson, Prigogine, Rosen, and others. It also incorporates more recent developments by Bhaskar, Cronin and Walker, Parisi, and the author.
This paper, entitled “Systems Causality, Assembly Theory and the Discrete Accumulation of Negentropy”, explains why, despite the prevalence of entropy, decay and disorganisation, the universe is essentially creative. It also gives meaning and purpose to human existence from a scientific perspective, and so challenges existential nihilism. It is deliberately written in plain English, and I have explained and defined any unavoidable technical terms. You can download a PDF free of charge via the following links:
The paper was written to help the International Society for the Systems Sciences in their search for a General System Theory. It therefore draws together many systems-related concepts: basic systems theory, causality, information, entropy, negentropy, emergence, Big History, the question of why multiple scientific disciplines employing different laws are necessary, and so on.
I see these concepts as applying to us in our day-to-day lives, and this work will therefore help me in developing social systems theory. So that is what I plan to return to now.
Abstract
The Second Law of Thermodynamics states that entropy, or disorder, increases in closed systems. However, the observable universe has, over time, produced increasingly complex structured entities, from atoms and molecules to living organisms and civilisations. This paper explores the mechanisms behind this phenomenon, known as the accumulation of negentropy. That is, the growth of order despite the natural tendency toward disorder.
It is proposed that the accumulation of negentropy is not a separate force but rather a consequence of causal interactions whose structured complexity has increased over time. These interactions follow the principles of Systems Causality, where cause-and-effect relationships are shaped by the transfer of matter, energy, and information. Assembly Theory provides an explanation for the step-by-step emergence of ever more complex structured entities, including causal relationships, within the constraints of prior structures. It also explains the emergence of new laws and scientific disciplines as complexity increases.
Using this framework, the paper analyses how causality has driven the emergence of increasingly complex structured entities throughout Big History, from quantum fluctuations and chemical selection to biological evolution and human civilisation. It also examines the implications for humanity today.