Do Cultures Evolve?

Herbert Spencer (1857) argued that cultures “evolve” even before Darwin revealed the mechanism by which species evolve, and for 150 years “social evolution” has been part of our lexicon. Certainly societies change over time, but there has always been a nagging concern that social evolution does not rest on quite the same principles as biological evolution. The problem, of course, is that biological evolution is defined and understood in terms of genes and populations of reproducing organisms, and these concepts are not easy to import into discussions of social change. Dawkins (in The Selfish Gene) introduced the term “meme” to represent the cultural equivalent of a gene. But Dawkins never intended the concept to have a rigorous meaning, and indeed it has never been possible to build a social-science equivalent of population genetics based on memes.

In The Engine of Complexity, Evolution as Computation, I argue that the fundamental mechanism that makes biological evolution possible is an information processing strategy, i.e. a computation (which I call the engine of complexity). This computational strategy is iterated (cyclic) and is conveniently presented visually. In the book, I further suggest that when a question arises as to whether some process is evolutionary in the Darwinian sense, the question can be settled by determining whether or not that process incorporates the engine of complexity computation. For example, various authors have loosely referred to the developmental processes that create the brain as “evolutionary.” Careful examination shows that brain development does not incorporate the engine of complexity computation; so though the outcome is impressive, the strategy used is different from the one that underlies biological evolution (discussed in Chapter 7 of The Engine of Complexity).

In contrast, when social evolution is examined, it is possible to identify the engine of complexity computation working at the heart of many aspects of social change. An appropriately labeled diagram taken from the book (Figure 11-1) is shown below.

[Figure 11-1 from The Engine of Complexity]

In this diagram, concepts are informational, while outcomes result from that information. Thus, a concept might be a business plan whose outcome is a business, a scientific hypothesis whose outcome is the explanation of a series of experiments, or religious strictures whose outcomes are human behaviors. The diagram has exactly the same form as a diagram of the engine of complexity computation (see the A General Theory of Evolution post below).

Because they can be described in exactly the same way, I conclude that many aspects of cultural change happen by means of mechanisms that utilize the same computational strategy as biological evolution. This is despite the obvious fact that information in these systems is encoded in completely different ways. Placing biological evolution and cultural evolution on the same theoretical foundation creates unity in our understanding and establishes, from a computational perspective, how the impressive accomplishments of these two rather different systems are possible.

Cosmos

Sunday night I watched the first episode of Cosmos: A Spacetime Odyssey (Fox at 8 Central Time, and National Geographic Mondays). I thought it was pretty good, and it was refreshing to see a program on commercial TV that doesn’t shy away from the reality of evolution. One review likened it to an hour-long ad for science. In a segment of the program where the host Neil deGrasse Tyson illustrates the age of the universe (13.8 billion years) by compressing it into a twelve-month calendar, humans have been around for about 40 minutes, and human history since Galileo (400 years) occupies the last 9 seconds. He then makes the interesting observation that the incredible changes that have occurred in our corner of the universe in those last brief 9 seconds are a tribute to the “incredible power of the scientific method.” I wonder if he realizes that this incredible power is based on the same computational strategy that also makes possible the evolution of life? We can watch future episodes to see if he makes the connection.

Introduction to Instructions

A subject one reads surprisingly little about is the centrality of instructions to understanding the world we live in. The concept of instruction use is at the same time both obvious and profound. All objects (and behaviors) can be divided into two categories, which I call type I and type II. Type II things require instructions for their formation; without instructions they never come into being. All type II things are products of living organisms (including the organisms themselves) or products of human ingenuity and inventiveness. Type I encompasses everything else. I have no doubt that if we ever discover intelligent life elsewhere in the universe, the worlds of those creatures will also be characterized by type II objects. Why do I say this? Because without instructions, natural processes are too restrictive; too many complex (and useful) things simply cannot be achieved. The laws of chemistry and physics allow the formation of many wondrous things, but without instructions, most possibilities are beyond reach.

Type II objects are no less dependent on the laws of chemistry and physics than type I objects, but they also require something else: extra information for their formation. I like to call this extra information “instructions,” but algorithms, recipes, or blueprints also describe it. I think it is fair to say that anything permitted by the laws of chemistry and physics can be achieved if the proper instructions can be acquired and used. Without instructions, the possibilities are much more limited. Without using instructions, the universe has produced galaxies, stars, planets, weather, geology, and even originated life. But the intricacies we observe in our world today are mostly dependent on instructions.

On planet Earth, instruction use is everywhere: all products of human technology require them, and all living things depend on them (in the form of DNA). Without instructions, the Earth would be like Mars with water.

Recognition of the central role of instructions in our world immediately raises the question of where instructions come from. For living organisms, the answer is pretty clear: every organism carries DNA molecules within its cells that encode the information (instructions) required for the organism’s creation. The origin of those DNA sequences is three and a half billion years of evolution. For human-made instructions, the answer seems at first blush to be even clearer: people thought them up. But is that really an explanation? “Thinking up” is not a recognized scientific mechanism. To understand the mechanism, we need to understand how the human brain works, and that is an understanding we do not currently have.

One can deduce something about the relevant brain processes by examining the requirements for producing new instructions. Instructions are characterized by information, and long instructions encode a lot of it. Information must be accounted for; you can’t just conjure it up out of thin air. Computational theory tells us that information can come from just two sources: preexisting information and randomness. Randomness can be thought of as encompassing all possibilities, but it is an unwieldy, hard-to-use source. Preexisting information is much easier to use, but it is limited to things that are already known. Clearly, human-made instructions include things that were not known long ago. This observation suggests that the brain has a mechanism for acquiring new information (i.e. information new to the world, or even to the universe).

Evolutionary computation is a powerful way of extracting useful information from randomness. This is illustrated by the one-max problem, whose behavior is explained on pages 153-158 of my book The Engine of Complexity, Evolution as Computation. In fact, evolutionary computation is such a powerful mechanism for extracting useful information from randomness that, if the human brain employs this strategy, that fact alone would largely solve the problem of how we are able to create new instructions.
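
To make this concrete, here is a minimal sketch of an evolutionary algorithm applied to the one-max problem (maximize the number of 1s in a bit string). The parameter values and function names are illustrative assumptions on my part, not taken from the book; the point is simply that repeated cycles of noisy copying plus selection accumulate information that no single random guess supplies.

```python
import random

# Minimal sketch: an evolutionary algorithm for the one-max problem.
# All parameter values below are illustrative, not prescribed anywhere.
GENOME_LENGTH = 50
POPULATION_SIZE = 40
MUTATION_RATE = 0.02   # probability of flipping each bit during copying
GENERATIONS = 100

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # One-max: fitness is simply the number of 1s in the string.
    return sum(genome)

def probabilistic_copy(genome):
    # Copying with occasional random errors: the source of new information.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(GENERATIONS):
    # Selection: keep the better half, then refill the population by noisy copying.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION_SIZE // 2]
    population = survivors + [probabilistic_copy(random.choice(survivors))
                              for _ in range(POPULATION_SIZE - len(survivors))]

print("best fitness:", fitness(max(population, key=fitness)))
```

Starting from random strings that average about 25 ones, the best fitness climbs steadily toward 50, because copying errors that happen to help are retained by selection while the rest are discarded.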

A General Theory of Evolution

Various phenomena that closely resemble biological evolution operate in arenas not directly associated with the modification and creation of new life forms, i.e. the traditional focus of evolutionary study. Among these are the refinement of antibodies produced by the body in response to infection, social and technological changes of all kinds, and evolutionary computer algorithms. Each can be formally described in computational terms. Doing so allows one to identify a process common to all of them. I call this process the “engine of complexity,” and it constitutes nothing less than a general theory of evolution; by this I mean a theory not limited to biology. When one accepts a computational definition of evolution, biological evolution is naturally seen as a particular implementation, or special case, of a more general information processing strategy.

It is convenient, at least for non-mathematicians, to present this definition visually. My favorite depiction (where “inputs” and “outputs” refer to information inputs to and outputs from a computation) is shown below.

Figure 5-2, taken from The Engine of Complexity, Evolution as Computation. Evolution diagrammed as a cyclic (iterative) computation. The superscripts t and t+1 indicate the cycle number, and m ≤ n.

[Figure 5-2]

It is easy to see that this definition describes biological evolution simply by relabeling the diagram. If “inputs” is changed to “parental DNA”, “probabilistic copying” to “reproduction”, “outputs” to “offspring DNA”, and “outcomes” to “offspring”, we have a perfectly good schematic diagram of biological evolution.

An advantage of using a computational definition of evolution is that it divorces the underlying essential process from the messy details of the system in which it operates. Thus, in biology, the fundamental process that enables evolution to perform its magic is the manipulation and accumulation of information encoded in DNA (the inner cycle in the diagram). This inner cycle is the engine of complexity. One wishing to study the basic computational process can ignore the details of plants and animals living in the natural world!

The “outcomes” in the diagram are not a required part of a computational definition of evolution, and indeed one can easily write evolutionary computer algorithms whose outcomes are not distinct from their outputs. When distinct outcomes are present in a system, their performance often plays a central role in selection. Thus, in biology, DNA is selected by the survival and reproduction of the organisms (outcomes) that carry it. This is essentially the same observation Richard Dawkins made in The Selfish Gene when he said, to paraphrase, that bodies are simply vehicles for transmitting their genes to the next generation.
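
As a minimal sketch of how one pass through the full cycle, outcomes included, might be written in code (the function names copy_with_errors, build_outcome, and score_outcome are hypothetical placeholders I introduce here, not anything prescribed in the book): m inputs yield n outputs by probabilistic copying, each output gives rise to an outcome, and outcome performance selects the m outputs that become the inputs of cycle t+1.

```python
import random

# Sketch of one pass through the engine-of-complexity cycle of Figure 5-2.
# copy_with_errors, build_outcome, and score_outcome are hypothetical placeholders
# for whatever a particular system (biological, cultural, or in silico) actually does.
def run_cycle(inputs, n, m, copy_with_errors, build_outcome, score_outcome):
    # Inner cycle: produce n outputs by probabilistic copying of the inputs.
    outputs = [copy_with_errors(random.choice(inputs)) for _ in range(n)]
    # Optional step: each output gives rise to an outcome
    # (e.g. offspring DNA -> offspring, or business plan -> business).
    outcomes = [build_outcome(out) for out in outputs]
    # Selection: outcome performance decides which m outputs (m <= n)
    # become the inputs of cycle t + 1.
    ranked = sorted(zip(outputs, outcomes),
                    key=lambda pair: score_outcome(pair[1]),
                    reverse=True)
    return [output for output, _ in ranked[:m]]
```

Iterating this function with the biological labels substituted as above describes biological evolution; dropping the outcome step, so that outputs are scored directly, gives the simpler evolutionary computer algorithms mentioned earlier.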

The process depicted by the arrows from outputs to outcomes in the diagram is the subject of instructions, a very different topic from evolution itself.
