
The Beginning of Infinity:

Explanations that Transform the World

by David Deutsch

After finishing this book in November of 2023, I wrote,

 

"Who would have thought a tome on epistemology would make for fascinating reading?! Deutsch made this book a page-turner for me as you might surmise from the copious notes I am sharing with you below."

My clippings below collapse a 487-page book into 19 pages, measured using 12-point type in Microsoft Word.

See all my book recommendations.  

Here are the selections I made:

Progress that is both rapid enough to be noticed and stable enough to continue over many generations has been achieved only once in the history of our species. It began at approximately the time of the scientific revolution, and is still under way. It has included improvements not only in scientific understanding, but also in technology, political institutions, moral values, art, and every aspect of human welfare.

 

Some types of transmutation happen spontaneously on Earth, in the decay of radioactive elements. This was first demonstrated in 1901, by the physicists Frederick Soddy and Ernest Rutherford,

 

What had changed? What made science effective at understanding the physical world when all previous ways had failed?

 

But one thing that all conceptions of the Enlightenment agree on is that it was a rebellion, and specifically a rebellion against authority in regard to knowledge.

 

This is why the Royal Society (one of the earliest scientific academies, founded in London in 1660) took as its motto ‘Nullius in verba’, which means something like ‘Take no one’s word for it.’

 

What was needed for the sustained, rapid growth of knowledge was a tradition of criticism. Before the Enlightenment, that was a very rare sort of tradition: usually the whole point of a tradition was to keep things the same.

 

One consequence of this tradition of criticism was the emergence of a methodological rule that a scientific theory must be testable (though this was not made explicit at first).

 

In contrast, the ancient theory that all matter is composed of combinations of the elements earth, air, fire and water was untestable, because it did not include any way of testing for the presence of those components. So it could never be refuted by experiment.

 

As the physicist Richard Feynman said, ‘Science is what we have learned about how to keep from fooling ourselves.’

 

An entire political, moral, economic and intellectual culture – roughly what is now called ‘the West’ – grew around the values entailed by the quest for good explanations, such as tolerance of dissent, openness to change, distrust of dogmatism and authority, and the aspiration to progress both by individuals and for the culture as a whole.

 

Explanation: Statement about what is there, what it does, and how and why.
Reach: The ability of some explanations to solve problems beyond those that they were created to solve.
Creativity: The capacity to create new explanations.
Empiricism: The misconception that we ‘derive’ all our knowledge from sensory experience.
Theory-laden: There is no such thing as ‘raw’ experience. All our experience of the world comes through layers of conscious and unconscious interpretation.
Inductivism: The misconception that scientific theories are obtained by generalizing or extrapolating repeated experiences, and that the more often a theory is confirmed by observation the more likely it becomes.

 

Induction: The non-existent process of ‘obtaining’ referred to above.
Principle of induction: The idea that ‘the future will resemble the past’, combined with the misconception that this asserts anything about the future.
Realism: The idea that the physical world exists in reality, and that knowledge of it can exist too.
Relativism: The misconception that statements cannot be objectively true or false, but can be judged only relative to some cultural or other arbitrary standard.
Instrumentalism: The misconception that science cannot describe reality, only predict outcomes of observations.
Justificationism: The misconception that knowledge can be genuine or reliable only if it is justified by some source or criterion.

 

Fallibilism: The recognition that there are no authoritative sources of knowledge, nor any reliable means of justifying…

 

Good/bad explanation: An explanation that is hard/easy to vary while still accounting for what it purports to account for.
The Enlightenment: (The beginning of) a way of pursuing knowledge with a tradition of criticism and seeking good explanations instead of reliance on authority.
Mini-enlightenment: A short-lived tradition of criticism.

 

Rational: Attempting to solve problems by seeking good explanations; actively pursuing error-correction by creating criticisms of both existing ideas and new proposals.
The West: The political, moral, economic and intellectual culture that has been growing around the Enlightenment values of science, reason and freedom.

 

The real source of our theories is conjecture, and the real source of our knowledge is conjecture alternating with criticism. We create theories by rearranging, combining, altering and adding to existing ideas with the intention of improving upon them. The role of experiment and observation is to choose between existing theories, not to be the source of new ones. We interpret experiences through explanatory theories, but true explanations are not obvious. Fallibilism entails not looking to authorities but instead acknowledging that we may always be mistaken, and trying to correct errors.

 

Most ancient accounts of the reality beyond our everyday experience were not only false, they had a radically different character from modern ones: they were anthropocentric. That is to say, they centred on human beings, and more broadly on people – entities with intentions and human-like thoughts – which included powerful, supernatural people such as spirits and gods.

 

Yet I am writing this in Oxford, England, where winter nights are likewise often cold enough to kill any human unprotected by clothing and other technology. So, while intergalactic space would kill me in a matter of seconds, Oxfordshire in its primeval state might do it in a matter of hours – which can be considered ‘life support’ only in the most contrived sense. There is a life-support system in Oxfordshire today, but it was not provided by the biosphere. It has been built by humans. It consists of clothes, houses, farms, hospitals, an electrical grid, a sewage system and so on. Nearly the whole of the Earth’s biosphere in its primeval state was likewise incapable of keeping an unprotected human alive for long.

 

Even the Great Rift Valley in eastern Africa, where our species evolved, was barely more hospitable than primeval Oxfordshire. Unlike the life-support system in that imagined spaceship, the Great Rift Valley lacked a safe water supply, and medical equipment, and comfortable living quarters, and was infested with predators, parasites and disease organisms. It frequently injured, poisoned, drenched, starved and sickened its ‘passengers’, and most of them died as a result.

 

It was similarly harsh to all the other organisms that lived there: few individuals live comfortably or die of old age in the supposedly beneficent biosphere. That is no accident: most populations, of most species, are living close to the edge of disaster and death.

 

Hence the metaphor of a spaceship or a life-support system is quite perverse: when humans design a life-support system, they design it to provide the maximum possible comfort, safety and longevity for its users within the available resources; the biosphere has no such priorities.

 

The first people to live at the latitude of Oxford (who were actually from a species related to us, possibly the Neanderthals) could do so only because they brought knowledge with them, about such things as tools, weapons, fire and clothing. That knowledge was transmitted from generation to generation not genetically but culturally.

 

To the extent that we are on a ‘spaceship’, we have never been merely its passengers, nor (as is often said) its stewards, nor even its maintenance crew: we are its designers and builders. Before the designs created by humans, it was not a vehicle, but only a heap of dangerous raw materials.

 

This is another reason that ‘one per cent inspiration and ninety-nine per cent perspiration’ is a misleading description of how progress happens: the ‘perspiration’ phase can be automated – just as the task of recognizing galaxies on astronomical photographs was. And the more advanced technology becomes, the shorter is the gap between inspiration and automation.

 

And, while every other organism is a factory for converting resources of a fixed type into more such organisms, human bodies (including their brains) are factories for transforming anything into anything that the laws of nature allow. They are ‘universal constructors’.

 

As Einstein remarked, ‘My pencil and I are more clever than I.’

 

For instance, at present during any given century there is about one chance in a thousand that the Earth will be struck by a comet or asteroid large enough to kill at least a substantial proportion of all human beings. That means that a typical child born in the United States today is more likely to die as a result of an astronomical event than a plane crash.

 

Here is another misconception in the Garden of Eden myth: that the supposed unproblematic state would be a good state to be in. Some theologians have denied this, and I agree with them: an unproblematic state is a state without creative thought. Its other name is death.

 

Person: An entity that can create explanatory knowledge.
Anthropocentric: Centred on humans, or on persons.
Fundamental or significant phenomenon: One that plays a necessary role in the explanation of many phenomena, or whose distinctive features require distinctive explanation in terms of fundamental theories.
Principle of Mediocrity: ‘There is nothing significant about humans.’
Parochialism: Mistaking appearance for reality, or local regularities for universal laws.

 

Spaceship Earth: ‘The biosphere is a life-support system for humans.’
Constructor: A device capable of causing other objects to undergo transformations without undergoing any net change itself.
Universal constructor: A constructor that can cause any raw materials to undergo any physically possible transformation, given the right information.

 

Both the Principle of Mediocrity and the Spaceship Earth idea are, contrary to their motivations, irreparably parochial and mistaken. From the least parochial perspectives available to us, people are the most significant entities in the cosmic scheme of things. They are not ‘supported’ by their environments, but support themselves by creating knowledge.

 

Dawkins named his tour-de-force account of neo-Darwinism The Selfish Gene because he wanted to stress that evolution does not especially promote the ‘welfare’ of species or individual organisms. But, as he also explained, it does not promote the ‘welfare’ of genes either: it adapts them not for survival in larger numbers, nor indeed for survival at all, but only for spreading through the population at the expense of rival genes, particularly slight variants of themselves.

 

reach: when the knowledge in a gene happens to have…

 

The most general way of stating the central assertion of the neo-Darwinian theory of evolution is that a population of replicators subject to variation (for instance by imperfect copying) will be taken over by those variants that are better than their rivals at causing themselves to be replicated.
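That central assertion can be made vivid with a toy simulation of my own (not from the book): a population of replicators copied with imperfect fidelity, in which one variant is slightly better at getting itself copied.

```python
import random

# Illustrative sketch, my example: each generation, every slot in the
# population is filled by copying a current member, chosen in proportion
# to that variant's replication success ("fitness"). The better
# replicator, B, starts rare but takes over.
random.seed(0)

fitness = {"A": 1.0, "B": 1.1}      # B replicates 10% better
population = ["A"] * 900 + ["B"] * 100

for generation in range(150):
    weights = [fitness[v] for v in population]
    population = random.choices(population, weights=weights, k=len(population))

share_B = population.count("B") / len(population)
print(f"B's share after 150 generations: {share_B:.3f}")
```

Note that nothing here rewards the ‘welfare’ of the organisms or the genes: B spreads solely because it causes itself to be replicated more often than its rival.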

 

(To avoid misunderstanding, let me stress that experience provides problems only by bringing already-existing ideas into conflict. It does not, of course, provide theories.)

 

Some philosophers confine the term ‘moral’ to problems about how one should treat other people. But such problems are continuous with problems of individuals choosing what sort of life to lead, which is why I adopt the more inclusive definition.

 

That is a universal system of tallying. But, like levels of emergence, there is a hierarchy of universality. The next level above tallying is counting, which involves numerals. When tallying goats one is merely thinking ‘another, and another, and another’; but when counting them one is thinking ‘forty, forty-one, forty-two…’ It is only with hindsight that we can regard tally marks as a system of numerals, known as the ‘unary’ system.

 

However, in regard to these more sophisticated applications, the system was not universal. Since there was no higher-valued symbol than M (one thousand), the numerals from two thousand onwards all began with a string of M’s, which therefore became nothing more than tally marks for thousands.
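The hierarchy of universality here is easy to see in code. A quick sketch of my own, not from the book: a tally (‘unary’) numeral needs n marks to write the number n, while a positional numeral needs only about log10(n) symbols, which is what makes it universal in practice.

```python
# Illustrative sketch: unary tallying vs. positional counting.

def unary(n: int) -> str:
    """Tally marks: one symbol per goat."""
    return "|" * n

def positional(n: int) -> str:
    """Ordinary base-10 numerals."""
    return str(n)

for n in [5, 42, 2024]:
    print(n, "unary length:", len(unary(n)), "positional length:", len(positional(n)))
```

The Roman system sits between the two: below a thousand it compresses, but beyond its largest symbol it degenerates back into tallying.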

 

Without error-correction all information processing, and hence all knowledge-creation, is necessarily bounded. Error-correction is the beginning of infinity.

 

Because of the necessity for error-correction, all jumps to universality occur in digital systems.

 

It is why spoken languages build words out of a finite set of elementary sounds: speech would not be intelligible if it were analogue.
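Why digital systems permit error-correction can be shown with a small simulation of my own (not from the book): copy a value through a noisy channel many times. The analogue chain keeps the raw value, so errors accumulate; the digital chain ‘snaps’ each copy back to the nearest allowed symbol, correcting small errors before they can grow.

```python
import random

# Illustrative sketch, my example: repeated noisy copying,
# with and without digital error-correction.
random.seed(1)

LEVELS = [0.0, 1.0]                      # the digital system's discrete symbols

def noisy_copy(x: float) -> float:
    return x + random.gauss(0, 0.05)     # small copying error each generation

def snap(x: float) -> float:
    # Error-correction: round to the nearest allowed level.
    return min(LEVELS, key=lambda level: abs(x - level))

analogue = digital = 1.0
for _ in range(1000):
    analogue = noisy_copy(analogue)
    digital = snap(noisy_copy(digital))

print(f"analogue after 1000 copies: {analogue:.3f}")  # typically drifts far from 1.0
print(f"digital after 1000 copies: {digital}")
```

The digital copy survives indefinitely because each generation's error is corrected while it is still small; the analogue copy has no such mechanism, so its information is eventually lost in noise.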

 

But then the philosopher Immanuel Kant (1724–1804), who was well aware of the distinction between the absolutely necessary truths of mathematics and the contingent truths of science, nevertheless concluded that Euclid’s theory of geometry was self-evidently true of nature.

 

Both the future of civilization and the outcome of a game of Russian roulette are unpredictable, but in different senses and for entirely unrelated reasons. Russian roulette is merely random. Although we cannot predict the outcome, we do know what the possible outcomes are, and the probability of each, provided that the rules of the game are obeyed. The future of civilization is unknowable, because the knowledge that is going to affect it has yet to be created. Hence the possible outcomes are not yet known, let alone their probabilities.

 

Just as no one in 1900 could have foreseen the consequences of innovations made during the twentieth century – including whole new fields such as nuclear physics, computer science and biotechnology – so our own future will be shaped by knowledge that we do not yet have.

 

People in 1900 did not consider the internet or nuclear power unlikely: they did not conceive of them at all.

 

Like Lagrange, Michelson himself had already contributed unwittingly to the new system – in this case with an experimental result. In 1887 he and his colleague Edward Morley had observed that the speed of light relative to an observer remains constant when the observer moves.

 

Observations are theory-laden. Given an experimental oddity, we have no way of predicting whether it will eventually be explained merely by correcting a minor parochial assumption or by revolutionizing entire sciences. We can know that only after we have seen it in the light of a new explanation.

 

Blind optimism is a stance towards the future. It consists of proceeding as if one knows that the bad outcomes will not happen. The opposite approach, blind pessimism, often called the precautionary principle, seeks to ward off disaster by avoiding everything not known to be safe. No one seriously advocates either of these two as a universal policy, but their assumptions and their arguments are common, and often creep into people’s planning.

 

Our Final Century makes the case that the period since the mid twentieth century has been the first in which technology has been capable of destroying civilization. But that is not so. Many civilizations in history were destroyed by the simple technologies of fire and the sword. Indeed, of all civilizations in history, the overwhelming majority have been destroyed, some intentionally, some as a result of plague or natural disaster. Virtually all of them could have avoided the catastrophes that destroyed them if only they had possessed a little additional knowledge, such as improved agricultural or military technology, better hygiene, or better political or economic institutions. Very few, if any, could have been saved by greater caution about innovation. In fact most had enthusiastically implemented the precautionary principle. 

 

In 1798, Malthus had argued, in his influential essay On Population, that the nineteenth century would inevitably see a permanent end to human progress.

 

Genetic studies suggest that our own species came close to extinction about 70,000 years ago, as a result of an unknown catastrophe which reduced its total numbers to only a few thousand.

 

Again, Popper was a key advocate of this rejection. He wrote: The question about the sources of our knowledge…has always been asked in the spirit of: ‘What are the best sources of our knowledge – the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?’ I propose to assume, instead, that no such ideal sources exist – no more than ideal rulers – and that all ‘sources’ are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’ ‘Knowledge without Authority’ (1960)

 

The Medici were soon promoting the new philosophy of ‘humanism’, which valued knowledge above dogma, and virtues such as intellectual independence, curiosity, good taste and friendship over piety and humility.

 

But that rapid progress lasted for only a generation or so. A charismatic monk, Girolamo Savonarola, began to preach apocalyptic sermons against humanism and every other aspect of the Florentine enlightenment.

 

Many citizens were persuaded, and in 1494 Savonarola managed to seize power. He reimposed all the traditional restrictions on art, literature, thought and behaviour. Secular music was banned. Clothing had to be plain. Frequent fasting became effectively compulsory. Homosexuality and prostitution were violently suppressed. The Jews of Florence were expelled. Gangs of ruffians inspired by Savonarola roamed the city searching for taboo artefacts such as mirrors, cosmetics, musical instruments, secular books, and almost anything beautiful. A huge pile of such treasures was ceremonially burned in the so-called ‘Bonfire of the Vanities’ in the centre of the city. Botticelli is said to have thrown some of his own paintings into the fire. It was the bonfire of optimism.

 

For example, the philosopher Roger Bacon (1214–94) is noted for rejecting dogma, advocating observation as a way of discovering the truth (albeit by ‘induction’), and making several scientific discoveries.

 

Optimism (in the sense that I have advocated) is the theory that all failures – all evils – are due to insufficient knowledge.

 

So I suppose that the real difference between the Spartans and us is that their moral education enjoins them to hold their most important ideas immune from criticism. Not to be open to suggestions. Not to criticize certain ideas such as their traditions or their conceptions of the gods; not to seek the truth, because they claim that they already have it.

 

Bad philosophy before the Enlightenment was typically of the because-I-say-so variety.

 

I have mentioned behaviourism, which is instrumentalism applied to psychology.

 

Bad philosophy: Philosophy that actively prevents the growth of knowledge.
Interpretation: The explanatory part of a scientific theory, supposedly distinct from its predictive or instrumental part.
Copenhagen interpretation: Niels Bohr’s combination of instrumentalism, anthropocentrism and studied ambiguity, used to avoid understanding quantum theory as being about reality.
Positivism: The bad philosophy that everything not ‘derived from observation’ should be eliminated from science.
Logical positivism: The bad philosophy that statements not verifiable by observation are meaningless.

 

Before the Enlightenment, bad philosophy was the rule and good philosophy the rare exception. With the Enlightenment came much more good philosophy, but bad philosophy became much worse, with the descent from empiricism (merely false) to positivism, logical positivism, instrumentalism, Wittgenstein, linguistic philosophy, and the ‘postmodernist’ and related movements. In science, the main impact of bad philosophy has been through the idea of separating a scientific theory into (explanationless) predictions and (arbitrary) interpretation. This has helped to legitimize dehumanizing explanations of human thought and behaviour. In quantum theory, bad philosophy manifested itself mainly as the Copenhagen interpretation and its many variants, and as the ‘shut-up-and-calculate’ interpretation. These appealed to doctrines such as logical positivism to justify systematic equivocation and to immunize themselves from criticism. 

 

So if it would be wrong for science to adopt that ‘democratic’ principle, why is it right for politics? Is it just because, as Churchill put it, ‘Many forms of Government have been tried and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.’

 

The conditions of ‘fairness’ as conceived in the various social-choice problems are misconceptions analogous to empiricism: they are all about the input to the decision-making process – who participates, and how their opinions are integrated to form the ‘preference of the group’. A rational analysis must concentrate instead on how the rules and institutions contribute to the removal of bad policies and rulers, and to the creation of new options.

 

Proportional representation is often defended on the grounds that it leads to coalition governments and compromise policies. But compromises – amalgams of the policies of the contributors – have an undeservedly high reputation. Though they are certainly better than immediate violence, they are generally, as I have explained, bad policies. If a policy is no one’s idea of what will work, then why should it work? But that is not the worst of it. The key defect of compromise policies is that when one of them is implemented and fails, no one learns anything because no one ever agreed with it. Thus compromise policies shield the underlying explanations which do at least seem good to some faction from being criticized and abandoned.

 

This is called the plurality voting system (‘plurality’ meaning ‘largest number of votes’) – often called the ‘first-past-the-post’ system, because there is no prize for any runner-up, and no second round of voting (both of which feature in other electoral systems for the sake of increasing the proportionality of the outcomes). Plurality voting typically ‘over-represents’ the two largest parties, compared with the proportion of votes they receive. Moreover, it is not guaranteed to avoid the population paradox, and is even capable of bringing one party to power when another has received far more votes in total. These features are often cited as arguments against plurality voting and in favour of a more proportional system – either literal proportional representation or other schemes such as transferable-vote systems and run-off systems which have the effect of making the representation of voters in the legislature more proportional. However, under Popper’s criterion, that is all insignificant in comparison with the greater effectiveness of plurality voting at removing bad governments and policies. 
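The claim that plurality voting can hand power to a party with fewer total votes is easy to verify with a toy example of my own (the district numbers are invented for illustration): one party wins more districts narrowly while the other piles up votes in a district it wins by a landslide.

```python
# Illustrative toy example: plurality (first-past-the-post) voting.
# Party A wins two districts narrowly; party B wins one overwhelmingly.
districts = [
    {"A": 51, "B": 49},
    {"A": 51, "B": 49},
    {"A": 10, "B": 90},
]

seats = {"A": 0, "B": 0}
totals = {"A": 0, "B": 0}
for district in districts:
    winner = max(district, key=district.get)  # largest number of votes wins the seat
    seats[winner] += 1
    for party, votes in district.items():
        totals[party] += votes

print("seats:", seats)    # seats: {'A': 2, 'B': 1} -> A takes sole charge
print("totals:", totals)  # totals: {'A': 112, 'B': 188} -> B won far more votes
```

Deutsch's point is that, judged by Popper's criterion, this ‘unfairness’ of inputs matters less than the system's effectiveness at removing bad governments outright.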

 

Let me trace the mechanism of that advantage more explicitly. Following a plurality-voting election, the usual outcome is that the party with the largest total number of votes has an overall majority in the legislature, and therefore takes sole charge. All the losing parties are removed entirely from power. This is rare under proportional representation, because some of the parties in the old coalition are usually needed in the new one. Consequently, the logic of plurality is that politicians and political parties have little chance of gaining any share in power unless they can persuade a substantial proportion of the population to vote for them. That gives all parties the incentive to find better explanations, or at least to convince more people of their existing ones, for if they fail they will be relegated to powerlessness at the next election. In the plurality system, the winning explanations are then exposed to criticism and testing, because they can be implemented without mixing them with the most important claims of opposing agendas. Similarly, the winning politicians are solely responsible for the choices they make, so they have the least possible scope for making excuses later if those are deemed to have been bad choices. If, by the time of the next election, they are less convincing to the voters than they were, there is usually no scope for deals that will keep them in power regardless.

 

Under proportional representation, there are strong incentives for the system’s characteristic unfairnesses to persist, or to become worse, over time. For example, if a small faction defects from a large party, it may then end up with more chance of having its policies tried out than it would if its supporters had remained within the original party.

 

The very phrase ‘It’s a matter of taste’ is used interchangeably with ‘There is no objective truth of the matter.’

 

Just as genes often work together in groups to achieve what we might think of as a single adaptation, so there are memeplexes consisting of several ideas which can, alternatively, be thought of as a single more complex idea, such as quantum theory or neo-Darwinism.

 

But merely being present in a mind does not automatically get a meme expressed as behaviour: the meme has to compete for that privilege with other ideas – memes and non-memes, about all sorts of subjects – in the same mind. And merely being expressed as behaviour does not automatically get the meme copied into a recipient along with other memes: it has to compete for the recipients’ attention and acceptance with all sorts of behaviours by other people, and with the recipient’s own ideas. All that is in addition to the analogue of the type of selection that genes face, each meme competing with rival versions of itself across the population, perhaps by containing the knowledge for some useful function.

 

And countless individuals have been harmed or killed by adopting memes that were bad for them – such as irrational political ideologies or dangerous fads.

 

Also, memes can be passed to people other than the holders’ biological descendants. Those factors make meme evolution enormously faster than gene evolution, which partly explains how memes can contain so much knowledge.

 

Hence the frequently cited metaphor of the history of life on Earth, in which human civilization occupies only the final ‘second’ of the ‘day’ during which life has so far existed, is misleading. In reality, a substantial proportion of all evolution on our planet to date has occurred in human brains. And it has barely begun. The whole of biological evolution was but a preface to the main story of evolution, the evolution of memes.

 

The post-Enlightenment West is the only society in history that for more than a couple of lifetimes has ever undergone change rapid enough for people to notice.

 

But, while a society lasted, all important areas of life seemed changeless to the participants: they could expect to die under much the same moral values, personal lifestyles, conceptual framework, technology and pattern of economic production as they were born under. And, of the changes that did occur, few were for the better. I shall call such societies ‘static societies’: societies changing on a timescale unnoticed by the inhabitants.

 

For a society to be static, something else must be happening as well. One thing my story did not take into account is that static societies have customs and laws – taboos – that prevent their memes from changing.

 

That is why the enforcement of the status quo is only ever a secondary method of preventing change – a mopping-up operation. The primary method is always – and can only be – to disable the source of new ideas, namely human creativity. So static societies always have traditions of bringing up children in ways that disable their creativity and critical faculties. That ensures that most of the new ideas that would have been capable of changing the society are never thought of in the first place.

 

They not only enact those memes: they see themselves as existing only in order to enact them. So, not only do such societies enforce qualities such as obedience, piety and devotion to duty, their members’ sense of their own selves is invested in the same standards. People know no others. So they feel pride and shame, and form all their aspirations and opinions, by the criterion of how thoroughly they subordinate themselves to the society’s memes.

 

Just as organisms are the tools of genes, so individuals are used by memes to achieve their ‘purpose’ of spreading themselves through the population.

 

For instance, when the Black Death plague destabilized the static societies of Europe in the fourteenth century, the new ideas for plague-prevention that spread best were extremely bad ones.

 

Thus, ironically, there is much truth in the typical static-society fear that any change is much more likely to do harm than good. A static society is indeed in constant danger of being harmed or destroyed by a newly arising dysfunctional meme. However, in the aftermath of the Black Death a few true and functional ideas did also spread, and may well have contributed to ending that particular static society in an unusually good way (with the Renaissance).

 

Since static societies cannot exist without effectively extinguishing the growth of knowledge, they cannot allow their members much opportunity to pursue happiness. (Ironically, creating knowledge is itself a natural human need and desire, and static societies, however primitive, ‘unnaturally’ suppress it.) From the point of view of every individual in such a society, its creativity-suppressing mechanisms are catastrophically harmful. Every static society must leave its members chronically baulked in their attempts to achieve anything positive for themselves as people, or indeed anything at all, other than their meme-mandated behaviours. It can perpetuate itself only by suppressing its members’ self-expression and breaking their spirits, and its memes are exquisitely adapted to doing this.

 

Rational and anti-rational memes

Thus, memes of this new kind, which are created by rational and critical thought, subsequently also depend on such thought to get themselves replicated faithfully. So I shall call them rational memes. Memes of the older, static-society kind, which survive by disabling their holders’ critical faculties, I shall call anti-rational memes.

 

The more accurately the hobgoblin’s attributes exploit genuine, widespread vulnerabilities of the human mind, the more faithfully the anti-rational meme will propagate. If the meme is to survive for many generations, it is essential that its implicit knowledge of these vulnerabilities be true and deep. But its overt content – the idea of the hobgoblin’s existence – need contain no truth. On the contrary, the non-existence of the hobgoblin helps to make the meme a better replicator, because the story is then unconstrained by the mundane attributes of any genuine menace, which are always finite and to some degree combatable.

 

But failure need not be permanent in a world in which all evils are due to lack of knowledge. We failed at first to notice the non-existence of a force of gravity. Now we understand it. Locating hang-ups is, in the last analysis, easier.

 

Another thing that should make us suspicious is the presence of the conditions for anti-rational meme evolution, such as deference to authority, static subcultures and so on. Anything that says ‘Because I say so’ or ‘It never did me any harm,’ anything that says ‘Let us suppress criticism of our idea because it is true,’ suggests static-society thinking. We should examine and criticize laws, customs and other institutions with an eye to whether they set up conditions for anti-rational memes to evolve. Avoiding such conditions is the essence of Popper’s criterion.

 

We now have to accept, and rejoice in bringing about, our next transformation: to active agents of progress in the emerging rational society – and universe.

 

Today, the creativity that humans use to improve ideas is what pre-eminently sets us apart from other species. Yet for most of the time that humans have existed it was not noticeably in use.

 

The real situation is that people need inexplicit knowledge to understand laws and other explicit statements, not vice versa.

 

If a parrot had copied snatches of Popper’s voice at a lecture, it would certainly have copied them with his Austrian accent: parrots are incapable of copying an utterance without its accent. But a human student might well be unable to copy it with the accent.

 

As I said, imitation is not at the heart of human meme replication.

 

Suppose that the lecturer had repeatedly returned to a certain key idea, and had expressed it with different words and gestures each time. The parrot’s (or ape’s) job would be that much harder than imitating only the first instance; the student’s much easier, because to a human observer each different way of putting the idea would convey additional knowledge.

 

Using our explanations, we ‘see’ right through the behaviour to the meaning.

 

The horror of static societies, which I described in the previous chapter, can now be seen as a hideous practical joke that the universe played on the human species. Our creativity, which evolved in order to increase the amount of knowledge that we could use, and which would immediately have been capable of producing an endless stream of useful innovations as well, was from the outset prevented from doing so by the very knowledge – the memes – that that creativity preserved. The strivings of individuals to better themselves were, from the outset, perverted by a superhumanly evil mechanism that turned their efforts to exactly the opposite end: to thwart all attempts at improvement; to keep sentient beings locked in a crude, suffering state for eternity. Only the Enlightenment, hundreds of thousands of years later, and after who knows how many false starts, may at last have made it practical to escape from that eternity into infinity.

 

As for the statues being ‘vivid evidence of…artistic skills’, Bronowski was having none of that either. To him they were vivid evidence of failure, not success:

 

The critical question about these statues is, Why were they all made alike? You see them sitting there, like Diogenes in their barrels, looking at the sky with empty eye-sockets, and watching the sun and the stars go overhead without ever trying to understand them. When the Dutch discovered this island on Easter Sunday in 1722, they said that it had the makings of an earthly paradise. But it did not. An earthly paradise is not made by this empty repetition…These frozen faces, these frozen frames in a film that is running down, mark a civilization which failed to take the first step on the ascent of rational knowledge.

 

The statues were all made alike because Easter Island was a static society. It never took that first step in the ascent of man – the beginning of infinity.

 

You have to live the solution, and to set about solving the new problems that this creates. It is because of this unsustainability that the island of Britain, with a far less hospitable climate than the subtropical Easter Island, now hosts a civilization with at least three times the population density that Easter Island had at its zenith, and at an enormously higher standard of living. Appropriately enough, this civilization has knowledge of how to live well without the forests that once covered much of Britain.

 

The Easter Islanders’ culture sustained them in both senses. This is the hallmark of a functioning static society. It provided them with a way of life; but it also inhibited change: it sustained their determination to enact and re-enact the same behaviours for generations. It sustained the values that placed forests – literally – beneath statues. And it sustained the shapes of those statues, and the pointless project of building ever more of them.

 

And, if the prevailing theory is true, the Easter Islanders started to starve before the fall of their civilization. In other words, even after it had stopped providing for them, it retained its fatal proficiency at sustaining a fixed pattern of behaviour. And so it remained effective at preventing them from addressing the problem by the only means that could possibly have been effective: creative thought and innovation. Attenborough regards the culture as having been very valuable and its fall as a tragedy. Bronowski’s view was closer to mine, which is that since the culture never improved, its survival for many centuries was a tragedy, like that of all static societies.

 

I have argued that the laws of nature cannot possibly impose any bound on progress: by the argument of Chapters 1 and 3, denying this is tantamount to invoking the supernatural. In other words, progress is sustainable, indefinitely. But only by people who engage in a particular kind of thinking and behaviour – the problem-solving and problem-creating kind characteristic of the Enlightenment. And that requires the optimism of a dynamic society.

 

In early prehistory, populations were tiny, knowledge was parochial, and history-making ideas were millennia apart. In those days, a meme spread only when one person observed another enacting it nearby, and (because of the staticity of cultures) rarely even then. So at that time human behaviour resembled that of other animals, and much of what happened was indeed explained by biogeography.

 

But developments such as abstract language, explanation, wealth above the level of subsistence, and long-range trade all had the potential to erode parochialism and hence to give causal power to ideas.

 

In reality, the difference between Sparta and Athens, or between Savonarola and Lorenzo de’ Medici, had nothing to do with their genes; nor did the difference between the Easter Islanders and the imperial British. They were all people – universal explainers and constructors. But their ideas were different. Nor did landscape cause the Enlightenment.

 

It would be much truer to say that the landscape we live in is the product of ideas. The primeval landscape, though packed with evidence and therefore opportunity, contained not a single idea. It is knowledge alone that converts landscapes into resources, and humans alone who are the authors of explanatory knowledge and hence of the uniquely human behaviour called ‘history’.

 

The Easter Island civilization collapsed because no human situation is free of new problems, and static societies are inherently unstable in the face of new problems.

 

So there is no resource-management strategy that can prevent disasters, just as there is no political system that provides only good leaders and good policies, nor a scientific method that provides only true theories. But there are ideas that reliably cause disasters, and one of them is, notoriously, the idea that the future can be scientifically planned. The only rational policy, in all three cases, is to judge institutions, plans and ways of life according to how good they are at correcting mistakes: removing bad policies and leaders, superseding bad explanations, and recovering from disasters. 

 

For example, one of the triumphs of twentieth-century progress was the discovery of antibiotics, which ended many of the plagues and endemic illnesses that had caused suffering and death since time immemorial. However, it has been pointed out almost from the outset by critics of ‘so-called progress’ that this triumph may only be temporary, because of the evolution of antibiotic-resistant pathogens. This is often held up as an indictment of – to give it its broad context – Enlightenment hubris. We need lose only one battle in this war of science against bacteria and their weapon, evolution (so the argument goes), to be doomed, because our other ‘so-called progress’ – such as cheap worldwide air travel, global trade, enormous cities – makes us more vulnerable than ever before to a global pandemic that could exceed the Black Death in destructiveness and even cause our extinction. 

 

But all triumphs are temporary. So to use this fact to reinterpret progress as ‘so-called progress’ is bad philosophy. The fact that reliance on specific antibiotics is unsustainable is only an indictment from the point of view of someone who expects a sustainable lifestyle…

 

The prophetic approach can see only what one might do to postpone disaster, namely improve sustainability: drastically reduce and disperse the population, make travel difficult, suppress contact between different geographical areas. A society which did this would not be able to afford the kind of scientific research that would lead to new antibiotics. Its members would hope that their lifestyle would protect them instead…

 

There is a saying that an ounce of prevention equals a pound of cure. But that is only when one knows what to prevent. No precautions can avoid problems that we do not yet foresee. To prepare for those, there is nothing we can do but increase our ability to put things right if they go wrong. Trying to rely on the sheer good luck of avoiding bad outcomes indefinitely would simply guarantee that we would eventually fail without the means of recovering.

 

‘This is Earth. Not the eternal and only home of mankind, but only a starting point of an infinite adventure. All you need do is make the decision [to end your static society]. It is yours to make.’ [With that decision] came the end, the final end of Eternity. And the beginning of Infinity. (Isaac Asimov, The End of Eternity, 1955)

 

The first person to measure the circumference of the Earth was the astronomer Eratosthenes of Cyrene, in the third century BCE. His result was fairly close to the actual value, which is about 40,000 kilometres. For most of history this was considered an enormous distance, but with the Enlightenment that conception gradually changed, and nowadays we think of the Earth as small.

 

Thus, in regard to the geography of the universe and to our place in it, the prevailing world view has rid itself of some parochial misconceptions. We know that we have explored almost the whole surface of that formerly enormous sphere; but we also know that there are far more places left to explore in the universe (and beneath the surface of the Earth’s land and oceans) than anyone imagined while we still had those misconceptions. In regard to theoretical knowledge, however, the prevailing world view has not yet caught up with Enlightenment values. Thanks to the fallacy and bias of prophecy, a persistent assumption remains that our existing theories are at or fairly close to the limit of what is knowable – that we are nearly there, or perhaps halfway there. As the economist David Friedman has remarked, most people believe that an income of about twice their own should be sufficient to satisfy any reasonable person, and that no genuine benefit can be derived from amounts above that.

 

As with wealth, so with scientific knowledge: it is hard to imagine what it would be like to know twice as much as we do, and so if we try to prophesy it we find ourselves just picturing the next few decimal places of what we already know.

 

Perhaps a more practical way of stressing the same truth would be to frame the growth of knowledge (all knowledge, not only scientific) as a continual transition from problems to better problems, rather than from problems to solutions or from theories to better theories.

 

Optimism and reason are incompatible with the conceit that our knowledge is ‘nearly there’ in any sense, or that its foundations are. Yet comprehensive optimism has always been rare, and the lure of the prophetic fallacy strong. But there have always been exceptions. Socrates famously claimed to be deeply ignorant. And Popper wrote: ‘I believe that it would be worth trying to learn something about the world even if in trying to do so we should merely learn that we do not know much… It might be well for all of us to remember that, while differing widely in the various little bits we know, in our infinite ignorance we are all equal’ (Conjectures and Refutations, 1963). Infinite ignorance is a necessary condition for there to be infinite potential for knowledge.

 

The Royal Society, for instance, was founded in 1660 – a development that would hardly have been conceivable a generation earlier.

 

Roy Porter marks 1688 as the beginning of the English Enlightenment.
