The evolutionary future of man - A biological view of progress

special case of "co-evolution". Co-evolution occurs whenever the environment in which creatures evolve is

itself evolving. From an antelope's point of view, lions are part of the environment like the weather--with the

important difference that lions evolve.

Virtual progress

I want to suggest a new kind of co-evolution which, I believe, may have been responsible for one of the most

spectacular examples of progressive evolution: the enlargement of the human brain. At some point in the

evolution of brains, they acquired the ability to simulate models of the outside world. In its advanced forms we

call this ability "imagination." It may be compared to the virtual-reality software that runs on some computers.

Now here is the point I want to make. The internal "virtual world" in which animals live may in effect become a

part of the environment, of comparable importance to the climate, vegetation, predators and so on outside. If

so, a co-evolutionary spiral may take off, with hardware--especially brain hardware--evolving to meet

improvements in the internal "virtual environment." The changes in hardware then stimulate improvements in

the virtual environment, and the spiral continues.

The progressive spiral is likely to advance even faster if the virtual environment is put together as a shared

enterprise involving many individuals. And it is likely to reach breakneck speeds if it can accumulate

progressively over generations. Language and other aspects of human culture provide a mechanism whereby

such accumulation can occur. It may be that brain hardware has co-evolved with the internal virtual worlds

that it creates. This can be called hardware-software co-evolution. Language could be both a vehicle of this

co-evolution and its most spectacular software product. We know almost nothing of how language originated,

since it started to fossilise only very recently, in the form of writing. Hardware has been fossilising for much

longer--at least the brain's bony outer casing has. Its steadily increasing size, indicating a corresponding

increase in the size of the brain itself, is what I want to turn to next.

It is almost certain that modern Homo sapiens (which dates only from about 100,000 years ago) is

descended from a similar species, H. erectus, which first appeared a little before 1.6m years ago. It is thought

that H. erectus, in turn, was descended from some form of Australopithecus. A possible candidate which lived

about 3m years ago is Australopithecus afarensis, represented by the famous "Lucy." These creatures, which

are often described as upright-walking apes, had brains about the size of a chimpanzee's. Figure 1 on the

next page shows pictures of the three skulls, in chronological order. Presumably the change from

Australopithecus to erectus was gradual. This is not to say that it took 1.5m years to accomplish at a uniform

rate. It could easily have occurred in fits and starts. The same goes for the change from erectus to sapiens.

By about 300,000 years ago, we start to find fossils that are called "archaic H. sapiens", largish-brained

people like ourselves but with heavy brow ridges more like H. erectus.

It looks, in a general way, as though there are some progressive changes running through this series. Our

braincase is nearly twice the size of erectus's; and erectus's braincase, in turn, is about twice the size of that

of Australopithecus afarensis. This impression is vividly illustrated in the next picture, which was prepared

using a program called Morph.*

To use Morph, you supply it with a starting picture and an ending picture, and tell it which points on the

starting picture correspond to which opposite-number points on the ending picture. Morph then computes a

series of mathematical intermediates between the two pictures. The series may be viewed as a cine film on

the computer screen, but for printing it is necessary to extract a series of still frames--arranged here in order

in a spiral (figure 2). The spiral includes two concatenated sequences: Australopithecus to H. erectus and H.

erectus to H. sapiens. Conveniently the two time intervals separating these three landmark fossils are

approximately the same, about 1.5m years. The three labelled landmark skulls constitute the data supplied to

Morph. All the others are the computed intermediates (ignore H. futuris for the moment).
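
By way of illustration, here is a minimal Python sketch (not the Morph program itself) of the landmark interpolation such software performs: matching points marked on the starting and ending pictures are blended linearly to produce each intermediate frame. The coordinates are invented for the example, and a real morphing program also warps and cross-dissolves the pixels; only the geometric skeleton of the idea is shown.

    def morph_frames(start_points, end_points, n_frames):
        """Return n_frames sets of (x, y) landmarks blended between the two shapes."""
        frames = []
        for i in range(n_frames):
            t = i / (n_frames - 1)  # blend factor: 0 = starting picture, 1 = ending picture
            frames.append([((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
                           for (x0, y0), (x1, y1) in zip(start_points, end_points)])
        return frames

    # invented landmark coordinates standing in for points marked on two skull photographs
    australopithecus = [(10, 40), (30, 55), (60, 50), (80, 30)]
    homo_erectus = [(12, 48), (32, 70), (65, 62), (82, 28)]

    for frame in morph_frames(australopithecus, homo_erectus, 5):
        print(frame)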

Swirl your eye round the spiral looking for trends. It is broadly true that any trends you find before H. erectus

continue after him. The film version shows this much more dramatically, so much so that it is hard, as you

watch the film, to detect any discontinuity as you pass through H. erectus. We have made similar films for a

number of probable evolutionary transitions in human ancestry. More often than not, trends show reversals of

direction. The relatively smooth continuity around H. erectus is quite unusual.

We can say that there has been a long, progressive--and by evolutionary standards very rapid--trend over the

past 3m years of human skull evolution. I am speaking of progress in the value-neutral sense here. As it



happens, anybody who thinks increased brain size has positive value can also claim this trend as value-laden

progress too. This is because the dominant trend, flowing both before and after H. erectus, is the spectacular

ballooning of the brain.

What of the future? Can we extrapolate the trend from H. erectus through and beyond H. sapiens, and predict

the skull shape of H. futuris 3m years hence? Only an orthogeneticist would take it seriously; but, for what it is

worth, we have made an extrapolation with the aid of Morph, and it is appended at the end of the spiral

diagram. It shows a continuation of the trend to inflate the balloon of the braincase; the chin continues to

move forward and sharpen into a silly little goatee point, while the jaw itself looks too small to chew anything

but baby pap. Indeed the whole cranium is quite reminiscent of a baby's skull. It was long ago suggested that

human evolution is an example of "paedomorphosis": the retention of juvenile characteristics into adulthood.

The adult human skull looks more like a baby chimp's than like an adult chimp's.

Don't bank on H. futuris

Is there any likelihood that something like this hypothetical large-brained H. futuris will evolve? I'd put very

little money on it, one way or the other. Certainly the mere fact that brain inflation has been the dominant

trend over the past 3m years says almost nothing about probable trends in the next 3m. Brains will continue

to inflate only if natural selection continues to favour large-brained individuals. This means, when you come

down to it, if large-brained individuals manage to have, on average, more children than small-brained ones.

It is not unreasonable to assume that large brains go with intelligence, and that intelligence, in our wild

ancestors, was associated with ability to survive, ability to attract mates or ability to outwit rivals. Not

unreasonable--but both these clauses would find their critics. It is an article of passionate faith among

"politically correct" biologists and anthropologists that brain size has no connection with intelligence; that

intelligence has nothing to do with genes; and that genes are probably nasty fascist things anyway.

Leaving this to one side, problems with the idea remain. In the days when most individuals died young, the

main qualification for reproduction was survival into adulthood. But in our western civilisation few die young,

most adults choose to have fewer children than they are physically and economically capable of, and it is by

no means clear that people with the largest families are the most intelligent. Anybody viewing future human

evolution from the perspective of advanced western civilisation is unlikely to make confident predictions about

brain size continuing to evolve.

In any case, all these ways of viewing the matter are far too short-term. Socially important phenomena such

as contraception and education exert their influences over the timescale of human historians, over decades

and centuries. Evolutionary trends--at least those that last long enough to deserve the title progressive--are

so slow that they are all but totally insensitive to the vagaries of social and historical time. If we could assume

that something like our advanced scientific civilisation was going to last for 1m, or even 100,000, years, it

might be worth thinking about the undercurrents of natural-selection pressure in these civilised conditions.

But the likelihood is that, in 100,000 years time, we shall either have reverted to wild barbarism, or else

civilisation will have advanced beyond all recognition--into colonies in outer space, for instance. In either

case, evolutionary extrapolations from present conditions are likely to be highly misleading.

Evolutionists are usually pretty coy about predicting the future. Our species is a particularly hard one to

predict because human culture, at least for the past few thousand years and speeding up all the time,

changes in ways that mimic evolutionary change, only thousands to hundreds of thousands of times faster.

This is most clearly seen when we look at technical hardware. It is almost a cliche to point out that the

wheeled vehicle, the aeroplane, and the electronic computer, to say nothing of more frivolous examples such

as dress fashions, evolve in ways strikingly reminiscent of biological evolution. My formal definitions of value-laden and value-neutral progress, although designed for fossil bones, can be applied, without modification, to

cultural and technological trends.

Prevailing skirt and hair lengths in western society are progressive--value-neutrally, because they are too

trivial to be anything else--for short periods if at all. Viewed over the timescale of decades, the average

lengths fritter up and down like yo-yos. Weapons improve (at what they are designed to do, which may be of

positive or negative value depending on your point of view) consistently and progressively, at least partly to

counter improvements in the weaponry of enemies. But mostly, like any other technology, they improve

because new inventions build on earlier ones and inventors in any age benefit from the ideas, efforts and



experience of their predecessors. This principle is most spectacularly demonstrated by the evolution of the

digital computer. The late Christopher Evans, a psychologist and author, calculated that if the motor car had

evolved as fast as the computer, and over the same time period, "Today you would be able to buy a Rolls-Royce for £1.35, it would do three million miles to the gallon, and it would deliver enough power to drive the

QE2. And if you were interested in miniaturisation, you could place half a dozen of them on a pinhead."

Science and the technology that it inspires can, of course, be used for backward ends. Continued trends in,

say, aeroplane or computer speed, are undoubtedly progressive in a value-neutral sense. It would be easy to

see them also as progressive in various value-laden senses. But such progress could also turn out to be

laden with deeply negative value if the technologies fall into the hands of, say, religious fundamentalists bent

on the destruction of rival sects who face a different point of the compass in order to pray, or some equally

insufferable habit. Much may depend on whether the societies with the scientific know-how and the civilised

values necessary to develop the technologies keep control of them; or whether they allow them to spread to

educationally and scientifically backward societies which happen to have the money to buy them.

Scientific and technological progress themselves are value-neutral. They are just very good at doing what

they do. If you want to do selfish, greedy, intolerant and violent things, scientific technology will provide you

with by far the most efficient way of doing so. But if you want to do good, to solve the world's problems, to

progress in the best value-laden sense, once again, there is no better means to those ends than the scientific

way. For good or ill, I expect scientific knowledge and technical invention to develop progressively over the

next 150 years, and at an accelerating rate.



The Improbability of God

by Richard Dawkins

The following article is from Free Inquiry magazine, Volume 18, Number 3.

Much of what people do is done in the name of God. Irishmen blow each other up in his name. Arabs blow

themselves up in his name. Imams and ayatollahs oppress women in his name. Celibate popes and priests

mess up people's sex lives in his name. Jewish shohets cut live animals' throats in his name. The

achievements of religion in past history - bloody crusades, torturing inquisitions, mass-murdering

conquistadors, culture-destroying missionaries, legally enforced resistance to each new piece of scientific

truth until the last possible moment - are even more impressive. And what has it all been in aid of? I believe it

is becoming increasingly clear that the answer is absolutely nothing at all. There is no reason for believing

that any sort of gods exist and quite good reason for believing that they do not exist and never have. It has all

been a gigantic waste of time and a waste of life. It would be a joke of cosmic proportions if it weren't so

tragic.

Why do people believe in God? For most people the answer is still some version of the ancient Argument

from Design. We look about us at the beauty and intricacy of the world - at the aerodynamic sweep of a

swallow's wing, at the delicacy of flowers and of the butterflies that fertilize them, through a microscope at the

teeming life in every drop of pond water, through a telescope at the crown of a giant redwood tree. We reflect

on the electronic complexity and optical perfection of our own eyes that do the looking. If we have any

imagination, these things drive us to a sense of awe and reverence. Moreover, we cannot fail to be struck by

the obvious resemblance of living organs to the carefully planned designs of human engineers. The argument

was most famously expressed in the watchmaker analogy of the eighteenth-century priest William Paley.

Even if you didn't know what a watch was, the obviously designed character of its cogs and springs and of

how they mesh together for a purpose would force you to conclude "that the watch must have had a maker:

that there must have existed, at some time, and at some place or other, an artificer or artificers, who formed it

for the purpose which we find it actually to answer; who comprehended its construction, and designed its

use." If this is true of a comparatively simple watch, how much the more so is it true of the eye, ear, kidney,

elbow joint, brain? These beautiful, complex, intricate, and obviously purpose-built structures must have had

their own designer, their own watchmaker - God.

So ran Paley's argument, and it is an argument that nearly all thoughtful and sensitive people discover for

themselves at some stage in their childhood. Throughout most of history it must have seemed utterly

convincing, self-evidently true. And yet, as the result of one of the most astonishing intellectual revolutions in

history, we now know that it is wrong, or at least superfluous. We now know that the order and apparent

purposefulness of the living world has come about through an entirely different process, a process that works

without the need for any designer and one that is a consequence of basically very simple laws of physics.

This is the process of evolution by natural selection, discovered by Charles Darwin and, independently, by

Alfred Russel Wallace.

What do all objects that look as if they must have had a designer have in common? The answer is statistical

improbability. If we find a transparent pebble washed into the shape of a crude lens by the sea, we do not

conclude that it must have been designed by an optician: the unaided laws of physics are capable of

achieving this result; it is not too improbable to have just "happened." But if we find an elaborate compound

lens, carefully corrected against spherical and chromatic aberration, coated against glare, and with "Carl

Zeiss" engraved on the rim, we know that it could not have just happened by chance. If you take all the atoms

of such a compound lens and throw them together at random under the jostling influence of the ordinary laws

of physics in nature, it is theoretically possible that, by sheer luck, the atoms would just happen to fall into the

pattern of a Zeiss compound lens, and even that the atoms round the rim should happen to fall in such a way

that the name Carl Zeiss is etched out. But the number of other ways in which the atoms could, with equal

likelihood, have fallen, is so hugely, vastly, immeasurably greater that we can completely discount the chance

hypothesis. Chance is out of the question as an explanation.

This is not a circular argument, by the way. It might seem to be circular because, it could be said, any

particular arrangement of atoms is, with hindsight, very improbable. As has been said before, when a ball

lands on a particular blade of grass on the golf course, it would be foolish to exclaim: "Out of all the billions of

blades of grass that it could have fallen on, the ball actually fell on this one. How amazingly, miraculously

improbable!" The fallacy here, of course, is that the ball had to land somewhere. We can only stand amazed



at the improbability of the actual event if we specify it a priori: for example, if a blindfolded man spins himself

round on the tee, hits the ball at random, and achieves a hole in one. That would be truly amazing, because

the target destination of the ball is specified in advance.

Of all the trillions of different ways of putting together the atoms of a telescope, only a minority would actually

work in some useful way. Only a tiny minority would have Carl Zeiss engraved on them, or, indeed, any

recognizable words of any human language. The same goes for the parts of a watch: of all the billions of

possible ways of putting them together, only a tiny minority will tell the time or do anything useful. And of

course the same goes, a fortiori, for the parts of a living body. Of all the trillions of trillions of ways of putting

together the parts of a body, only an infinitesimal minority would live, seek food, eat, and reproduce. True,

there are many different ways of being alive - at least ten million different ways if we count the number of

distinct species alive today - but, however many ways there may be of being alive, it is certain that there are

vastly more ways of being dead!

We can safely conclude that living bodies are billions of times too complicated - too statistically improbable - to have come into being by sheer chance. How, then, did they come into being? The answer is that chance

enters into the story, but not a single, monolithic act of chance. Instead, a whole series of tiny chance steps,

each one small enough to be a believable product of its predecessor, occurred one after the other in

sequence. These small steps of chance are caused by genetic mutations, random changes - mistakes, really - in the genetic material. They give rise to changes in the existing bodily structure. Most of these changes are

deleterious and lead to death. A minority of them turn out to be slight improvements, leading to increased

survival and reproduction. By this process of natural selection, those random changes that turn out to be

beneficial eventually spread through the species and become the norm. The stage is now set for the next

small change in the evolutionary process. After, say, a thousand of these small changes in series, each

change providing the basis for the next, the end result has become, by a process of accumulation, far too

complex to have come about in a single act of chance.

For instance, it is theoretically possible for an eye to spring into being, in a single lucky step, from nothing:

from bare skin, let's say. It is theoretically possible in the sense that a recipe could be written out in the form

of a large number of mutations. If all these mutations happened simultaneously, a complete eye could,

indeed, spring from nothing. But although it is theoretically possible, it is in practice inconceivable. The

quantity of luck involved is much too large. The "correct" recipe involves changes in a huge number of genes

simultaneously. The correct recipe is one particular combination of changes out of trillions of equally probable

combinations of chances. We can certainly rule out such a miraculous coincidence. But it is perfectly

plausible that the modern eye could have sprung from something almost the same as the modern eye but not

quite: a very slightly less elaborate eye. By the same argument, this slightly less elaborate eye sprang from a

slightly less elaborate eye still, and so on. If you assume a sufficiently large number of sufficiently small

differences between each evolutionary stage and its predecessor, you are bound to be able to derive a full,

complex, working eye from bare skin. How many intermediate stages are we allowed to postulate? That

depends on how much time we have to play with. Has there been enough time for eyes to evolve by little

steps from nothing?

The fossils tell us that life has been evolving on Earth for more than 3,000 million years. It is almost

impossible for the human mind to grasp such an immensity of time. We, naturally and mercifully, tend to see

our own expected lifetime as a fairly long time, but we can't expect to live even one century. It is 2,000 years

since Jesus lived, a time span long enough to blur the distinction between history and myth. Can you imagine

a million such periods laid end to end? Suppose we wanted to write the whole history on a single long scroll.

If we crammed all of Common Era history into one metre of scroll, how long would the pre-Common Era part

of the scroll, back to the start of evolution, be? The answer is that the pre-Common Era part of the scroll

would stretch from Milan to Moscow. Think of the implications of this for the quantity of evolutionary change

that can be accommodated. All the domestic breeds of dogs - Pekingeses, poodles, spaniels, Saint Bernards,

and Chihuahuas - have come from wolves in a time span measured in hundreds or at the most thousands of

years: no more than two metres along the road from Milan to Moscow. Think of the quantity of change

involved in going from a wolf to a Pekingese; now multiply that quantity of change by a million. When you look

at it like that, it becomes easy to believe that an eye could have evolved from no eye by small degrees.
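
For readers who like to check the arithmetic, the scroll analogy can be worked through in a few lines of Python, taking the round figures above at face value; the figure for dog domestication is an assumed round number of my own.

    years_per_metre = 2_000          # all of Common Era history squeezed into one metre
    evolution_years = 3_000_000_000  # "more than 3,000 million years" of evolution

    scroll_metres = evolution_years / years_per_metre
    print(f"Pre-Common Era scroll: {scroll_metres:,.0f} metres, "
          f"about {scroll_metres / 1000:,.0f} km")     # on the order of the road from Milan to Moscow

    dog_years = 4_000                # wolves to Pekingese in at most a few thousand years (assumed)
    print(f"Wolf to Pekingese: about {dog_years / years_per_metre:.0f} metres of that road")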

It remains necessary to satisfy ourselves that every one of the intermediates on the evolutionary route, say

from bare skin to a modern eye, would have been favored by natural selection; would have been an

improvement over its predecessor in the sequence or at least would have survived. It is no good proving to



ourselves that there is theoretically a chain of almost perceptibly different intermediates leading to an eye if

many of those intermediates would have died. It is sometimes argued that the parts of an eye have to be all

there together or the eye won't work at all. Half an eye, the argument runs, is no better than no eye at all. You

can't fly with half a wing; you can't hear with half an ear. Therefore there can't have been a series of step-by-step intermediates leading up to a modern eye, wing, or ear.

This type of argument is so naive that one can only wonder at the subconscious motives for wanting to

believe it. It is obviously not true that half an eye is useless. Cataract sufferers who have had their lenses

surgically removed cannot see very well without glasses, but they are still much better off than people with no

eyes at all. Without a lens you can't focus a detailed image, but you can avoid bumping into obstacles and

you could detect the looming shadow of a predator.

As for the argument that you can't fly with only half a wing, it is disproved by large numbers of very successful

gliding animals, including mammals of many different kinds, lizards, frogs, snakes, and squids. Many different

kinds of tree-dwelling animals have flaps of skin between their joints that really are fractional wings. If you fall

out of a tree, any skin flap or flattening of the body that increases your surface area can save your life. And,

however small or large your flaps may be, there must always be a critical height such that, if you fall from a

tree of that height, your life would have been saved by just a little bit more surface area. Then, when your

descendants have evolved that extra surface area, their lives would be saved by just a bit more still if they fell

from trees of a slightly greater height. And so on by insensibly graded steps until, hundreds of generations

later, we arrive at full wings.

Eyes and wings cannot spring into existence in a single step. That would be like having the almost infinite

luck to hit upon the combination number that opens a large bank vault. But if you spun the dials of the lock at

random, and every time you got a little bit closer to the lucky number the vault door creaked open another

chink, you would soon have the door open! Essentially, that is the secret of how evolution by natural selection

achieves what once seemed impossible. Things that cannot plausibly be derived from very different

predecessors can plausibly be derived from only slightly different predecessors. Provided only that there is a

sufficiently long series of such slightly different predecessors, you can derive anything from anything else.
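
The vault analogy lends itself to a short Python sketch of cumulative selection: spin one dial at a time, and keep any spin that leaves the dials no further from the lucky number. The target combination is invented for illustration; the point is the contrast between the few hundred spins this typically takes and the million-to-one odds against a single lucky guess.

    import random

    TARGET = [7, 2, 9, 4, 1, 8]   # the vault's "lucky number" (invented for the example)
    POSITIONS = 10                # each dial runs 0-9

    def matching_dials(dials):
        """How far the door has creaked open: the number of dials already correct."""
        return sum(d == t for d, t in zip(dials, TARGET))

    dials = [random.randrange(POSITIONS) for _ in TARGET]
    spins = 0
    while dials != TARGET:
        trial = dials[:]
        trial[random.randrange(len(trial))] = random.randrange(POSITIONS)  # spin one dial at random
        if matching_dials(trial) >= matching_dials(dials):  # keep any change that is no worse
            dials = trial
        spins += 1

    print(f"Opened after {spins} spins; a single lucky guess has odds of 1 in {POSITIONS ** len(TARGET):,}")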

Evolution, then, is theoretically capable of doing the job that, once upon a time, seemed to be the prerogative

of God. But is there any evidence that evolution actually has happened? The answer is yes; the evidence is

overwhelming. Millions of fossils are found in exactly the places and at exactly the depths that we should

expect if evolution had happened. Not a single fossil has ever been found in any place where the evolution

theory would not have expected it, although this could very easily have happened: a fossil mammal in rocks

so old that fishes have not yet arrived, for instance, would be enough to disprove the evolution theory.

The patterns of distribution of living animals and plants on the continents and islands of the world are exactly

what would be expected if they had evolved from common ancestors by slow, gradual degrees. The patterns

of resemblance among animals and plants are exactly what we should expect if some were close cousins, and

others more distant cousins to each other. The fact that the genetic code is the same in all living creatures

overwhelmingly suggests that all are descended from one single ancestor. The evidence for evolution is so

compelling that the only way to save the creation theory is to assume that God deliberately planted enormous

quantities of evidence to make it look as if evolution had happened. In other words, the fossils, the

geographical distribution of animals, and so on, are all one gigantic confidence trick. Does anybody want to

worship a God capable of such trickery? It is surely far more reverent, as well as more scientifically sensible,

to take the evidence at face value. All living creatures are cousins of one another, descended from one

remote ancestor that lived more than 3,000 million years ago.

The Argument from Design, then, has been destroyed as a reason for believing in a God. Are there any other

arguments? Some people believe in God because of what appears to them to be an inner revelation. Such

revelations are not always edifying but they undoubtedly feel real to the individual concerned. Many

inhabitants of lunatic asylums have an unshakable inner faith that they are Napoleon or, indeed, God himself.

There is no doubting the power of such convictions for those that have them, but this is no reason for the rest

of us to believe them. Indeed, since such beliefs are mutually contradictory, we can't believe them all.

There is a little more that needs to be said. Evolution by natural selection explains a lot, but it couldn't start

from nothing. It couldn't have started until there was some kind of rudimentary reproduction and heredity.

Modern heredity is based on the DNA code, which is itself too complicated to have sprung spontaneously into



being by a single act of chance. This seems to mean that there must have been some earlier hereditary

system, now disappeared, which was simple enough to have arisen by chance and the laws of chemistry and

which provided the medium in which a primitive form of cumulative natural selection could get started. DNA

was a later product of this earlier cumulative selection. Before this original kind of natural selection, there was

a period when complex chemical compounds were built up from simpler ones and before that a period when

the chemical elements were built up from simpler elements, following the well-understood laws of physics.

Before that, everything was ultimately built up from pure hydrogen in the immediate aftermath of the big bang,

which initiated the universe.

There is a temptation to argue that, although God may not be needed to explain the evolution of complex

order once the universe, with its fundamental laws of physics, had begun, we do need a God to explain the

origin of all things. This idea doesn't leave God with very much to do: just set off the big bang, then sit back

and wait for everything to happen. The physical chemist Peter Atkins, in his beautifully written book The

Creation, postulates a lazy God who strove to do as little as possible in order to initiate everything. Atkins

explains how each step in the history of the universe followed, by simple physical law, from its predecessor.

He thus pares down the amount of work that the lazy creator would need to do and eventually concludes that

he would in fact have needed to do nothing at all!

The details of the early phase of the universe belong to the realm of physics, whereas I am a biologist, more

concerned with the later phases of the evolution of complexity. For me, the important point is that, even if the

physicist needs to postulate an irreducible minimum that had to be present in the beginning, in order for the

universe to get started, that irreducible minimum is certainly extremely simple. By definition, explanations that

build on simple premises are more plausible and more satisfying than explanations that have to postulate

complex and statistically improbable beginnings. And you can't get much more complex than an Almighty

God!

------------------------------------------------------------------------
Richard Dawkins is Oxford's Professor of Public Understanding of Science. He is the author of The Blind

Watchmaker (on which this article is partly based) and Climbing Mount Improbable. He is a Senior Editor of

Free Inquiry.



The Information Challenge

By Richard Dawkins

Article in The Skeptic Vol 18, No 4 Dec 1998



In September 1997, I allowed an Australian film crew into my house in Oxford without realising that their

purpose was creationist propaganda. In the course of a suspiciously amateurish interview, they issued a

truculent challenge to me to "give an example of a genetic mutation or an evolutionary process which can be

seen to increase the information in the genome." It is the kind of question only a creationist would ask in that

way, and it was at this point I tumbled to the fact that I had been duped into granting an interview to

creationists - a thing I normally don't do, for good reasons. In my anger I refused to discuss the question

further, and told them to stop the camera. However, I eventually withdrew my peremptory termination of the

interview as a whole. This was solely because they pleaded with me that they had come all the way from

Australia specifically in order to interview me. Even if this was a considerable exaggeration, it seemed, on

reflection, ungenerous to tear up the legal release form and throw them out. I therefore relented.

My generosity was rewarded in a fashion that anyone familiar with fundamentalist tactics might have

predicted. When I eventually saw the film a year later [1], I found that it had been edited to give the false

impression that I was incapable of answering the question about information content [2]. In fairness, this may

not have been quite as intentionally deceitful as it sounds. You have to understand that these people really

believe that their question cannot be answered! Pathetic as it sounds, their entire journey from Australia

seems to have been a quest to film an evolutionist failing to answer it.

With hindsight - given that I had been suckered into admitting them into my house in the first place - it might

have been wiser simply to answer the question. But I like to be understood whenever I open my mouth - I

have a horror of blinding people with science - and this was not a question that could be answered in a

soundbite. First you have to explain the technical meaning of "information". Then the relevance to

evolution, too, is complicated - not really difficult but it takes time. Rather than engage now in further

recriminations and disputes about exactly what happened at the time of the interview (for, to be fair, I should

say that the Australian producer's memory of events seems to differ from mine), I shall try to redress the

matter now in constructive fashion by answering the original question, the "Information Challenge", at

adequate length - the sort of length you can achieve in a proper article.

Information

The technical definition of "information" was introduced by the American engineer Claude Shannon in 1948.

An employee of the Bell Telephone Company, Shannon was concerned to measure information as an

economic commodity. It is costly to send messages along a telephone line. Much of what passes in a

message is not information: it is redundant. You could save money by recoding the message to remove the

redundancy. Redundancy was a second technical term introduced by Shannon, as the inverse of information.

Both definitions were mathematical, but we can convey Shannon's intuitive meaning in words.

Redundancy is any part of a message that is not informative, either because the recipient already knows it (is

not surprised by it) or because it duplicates other parts of the message. In the sentence "Rover is a poodle

dog", the word "dog" is redundant because "poodle" already tells us that Rover is a dog. An economical

telegram would omit it, thereby increasing the informative proportion of the message. "Arr JFK Fri pm pls mt

BA Cncrd flt" carries the same information as the much longer, but more redundant, "I'll be arriving at John F

Kennedy airport on Friday evening; please meet the British Airways Concorde flight". Obviously the brief,

telegraphic message is cheaper to send (although the recipient may have to work harder to decipher it - redundancy has its virtues if we forget economics). Shannon wanted to find a mathematical way to capture

the idea that any message could be broken into the information (which is worth paying for), the redundancy

(which can, with economic advantage, be deleted from the message because, in effect, it can be

reconstructed by the recipient) and the noise (which is just random rubbish).

"It rained in Oxford every day this week" carries relatively little information, because the receiver is not

surprised by it. On the other hand, "It rained in the Sahara desert every day this week" would be a message

with high information content, well worth paying extra to send. Shannon wanted to capture this sense of

information content as "surprise value". It is related to the other sense - "that which is not duplicated in other

parts of the message" - because repetitions lose their power to surprise. Note that Shannon's definition of the

quantity of information is independent of whether it is true. The measure he came up with was ingenious and



intuitively satisfying. Let's estimate, he suggested, the receiver's ignorance or uncertainty before receiving the

message, and then compare it with the receiver's remaining ignorance after receiving the message. The

quantity of ignorance-reduction is the information content. Shannon's unit of information is the bit, short for

"binary digit". One bit is defined as the amount of information needed to halve the receiver's prior uncertainty,

however great that prior uncertainty was (mathematical readers will notice that the bit is, therefore, a

logarithmic measure).

In practice, you first have to find a way of measuring the prior uncertainty - that which is reduced by the

information when it comes. For particular kinds of simple message, this is easily done in terms of

probabilities. An expectant father watches the Caesarian birth of his child through a window into the operating

theatre. He can't see any details, so a nurse has agreed to hold up a pink card if it is a girl, blue for a boy.

How much information is conveyed when, say, the nurse flourishes the pink card to the delighted father? The

answer is one bit - the prior uncertainty is halved. The father knows that a baby of some kind has been born,

so his uncertainty amounts to just two possibilities - boy and girl - and they are (for purposes of this

discussion) equal. The pink card halves the father's prior uncertainty from two possibilities to one (girl). If

there'd been no pink card but a doctor had walked out of the operating theatre, shaken the father's hand and

said "Congratulations old chap, I'm delighted to be the first to tell you that you have a daughter", the

information conveyed by the 17-word message would still be only one bit.
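
The rule that one bit halves the prior uncertainty can be put in a line or two of Python: a message that settles N equally likely alternatives carries log2(N) bits, so the pink card, which settles two, carries exactly one bit however many words are used to convey the same news.

    import math

    # each bit received halves the number of equally likely possibilities remaining
    for alternatives in (2, 4, 8, 52):
        print(f"{alternatives:>2} equiprobable alternatives -> {math.log2(alternatives):.1f} bits")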

Computer information

Computer information is held in a sequence of noughts and ones. There are only two possibilities, so each 0

or 1 can hold one bit. The memory capacity of a computer, or the storage capacity of a disc or tape, is often

measured in bits, and this is the total number of 0s or 1s that it can hold. For some purposes, more

convenient units of measurement are the byte (8 bits), the kilobyte (1000 bytes or 8000 bits), the megabyte (a

million bytes or 8 million bits) or the gigabyte (1000 million bytes or 8000 million bits). Notice that these

figures refer to the total available capacity. This is the maximum quantity of information that the device is

capable of storing. The actual amount of information stored is something else. The capacity of my hard disc

happens to be 4.2 gigabytes. Of this, about 1.4 gigabytes are actually being used to store data at present.

But even this is not the true information content of the disc in Shannon's sense. The true information content

is smaller, because the information could be more economically stored. You can get some idea of the true

information content by using one of those ingenious compression programs like "Stuffit". Stuffit looks for

redundancy in the sequence of 0s and 1s, and removes a hefty proportion of it by recoding - stripping out

internal predictability. Maximum information content would be achieved (probably never in practice) only if

every 1 or 0 surprised us equally. Before data is transmitted in bulk around the Internet, it is routinely

compressed to reduce redundancy.
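
The gap between capacity used and true information content is easy to see with a general-purpose compressor; here zlib stands in for a tool like Stuffit. A compressor strips out internal predictability, so highly redundant data shrinks dramatically while random data barely shrinks at all; the compressed size is only a rough upper bound on the true information content, not an exact measure.

    import os
    import zlib

    redundant = b"It rained in Oxford every day this week. " * 100
    random_data = os.urandom(len(redundant))

    for label, data in (("redundant text", redundant), ("random bytes", random_data)):
        packed = zlib.compress(data, 9)   # recode to strip out internal predictability
        print(f"{label}: {len(data)} bytes stored, {len(packed)} bytes after compression")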

That's good economics. But on the other hand it is also a good idea to keep some redundancy in messages,

to help correct errors. In a message that is totally free of redundancy, after there's been an error there is no

means of reconstructing what was intended. Computer codes often incorporate deliberately redundant "parity

bits" to aid in error detection. DNA, too, has various error-correcting procedures which depend upon

redundancy. When I come on to talk of genomes, I'll return to the three-way distinction between total

information capacity, information capacity actually used, and true information content.
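
A parity bit is the simplest example of such deliberately added redundancy, and a few lines of Python show the idea: one extra bit makes the count of 1s even, so the receiver can detect, though not locate, any single flipped bit.

    def add_even_parity(bits):
        """Append one redundant bit so that the total number of 1s is even."""
        return bits + [sum(bits) % 2]

    def parity_ok(received):
        return sum(received) % 2 == 0

    word = [1, 0, 1, 1, 0, 0, 1]
    sent = add_even_parity(word)
    print(sent, parity_ok(sent))            # arrives intact: the check passes

    corrupted = sent[:]
    corrupted[3] ^= 1                       # one bit flipped in transit
    print(corrupted, parity_ok(corrupted))  # the check fails: error detected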

It was Shannon's insight that information of any kind, no matter what it means, no matter whether it is true or

false, and no matter by what physical medium it is carried, can be measured in bits, and is translatable into

any other medium of information. The great biologist J B S Haldane used Shannon's theory to compute the

number of bits of information conveyed by a worker bee to her hivemates when she "dances" the location of a

food source (about 3 bits to tell about the direction of the food and another 3 bits for the distance of the food).

In the same units, I recently calculated that I'd need to set aside 120 megabits of laptop computer memory to

store the triumphal opening chords of Richard Strauss's "Also Sprach Zarathustra" (the "2001" theme) which I

wanted to play in the middle of a lecture about evolution. Shannon's economics enable you to calculate how

much modem time it'll cost you to e-mail the complete text of a book to a publisher in another land. Fifty years

after Shannon, the idea of information as a commodity, as measurable and interconvertible as money or

energy, has come into its own.
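
Two quick back-of-envelope uses of those units, with round figures that are assumptions of mine rather than anything stated above: three bits suffice to name one of eight equally likely direction sectors, and a plain-text book sent over a 1990s modem ties up the line for a predictable number of seconds.

    import math

    sectors = 8                      # assume eight equally likely direction sectors for the bee dance
    print(f"Direction of the food: {math.log2(sectors):.0f} bits")

    book_bytes = 500_000             # assume roughly 500 kilobytes of plain text for a book
    modem_bits_per_second = 56_000   # assume a 56k modem of the period
    print(f"E-mailing the book: about {book_bytes * 8 / modem_bits_per_second:.0f} seconds of line time")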

DNA information

DNA carries information in a very computer-like way, and we can measure the genome's capacity in bits too,

if we wish. DNA doesn't use a binary code, but a quaternary one. Whereas the unit of information in the

computer is a 1 or a 0, the unit in DNA can be T, A, C or G. If I tell you that a particular location in a DNA



sequence is a T, how much information is conveyed from me to you? Begin by measuring the prior

uncertainty. How many possibilities are open before the message "T" arrives? Four. How many possibilities

remain after it has arrived? One. So you might think the information transferred is four bits, but actually it is

two. Here's why (assuming that the four letters are equally probable, like the four suits in a pack of cards).

Remember that Shannon's metric is concerned with the most economical way of conveying the message.

Think of it as the number of yes/no questions that you'd have to ask in order to narrow down to certainty, from

an initial uncertainty of four possibilities, assuming that you planned your questions in the most economical

way. "Is the mystery letter before D in the alphabet?" No. That narrows it down to T or G, and now we need

only one more question to clinch it. So, by this method of measuring, each "letter" of the DNA has an

information capacity of 2 bits.

Whenever the prior uncertainty of the recipient can be expressed as a number of equiprobable alternatives N, the

information content of a message which narrows those alternatives down to one is log₂N (the power to which

2 must be raised in order to yield the number of alternatives N). If you pick a card, any card, from a normal

pack, a statement of the identity of the card carries log₂52, or 5.7 bits of information. In other words, given a

large number of guessing games, it would take 5.7 yes/no questions on average to guess the card, provided

the questions are asked in the most economical way. The first two questions might establish the suit. (Is it

red? Is it a diamond?) The remaining three or four questions would successively divide and conquer the suit

(is it a 7 or higher? etc.), finally homing in on the chosen card. When the prior uncertainty is some mixture of

alternatives that are not equiprobable, Shannon's formula becomes a slightly more elaborate weighted

average, but it is essentially similar. By the way, Shannon's weighted average is the same formula as

physicists have used, since the nineteenth century, for entropy. The point has interesting implications but I

shall not pursue them here.
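
The weighted average just mentioned can be written out in a few lines of Python: for alternatives with probabilities p, the information content is the sum of -p*log2(p) bits, which collapses to log2(N) when the N alternatives are equiprobable.

    import math

    def shannon_bits(probabilities):
        """Information content, in bits, of a message drawn from these probabilities."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(shannon_bits([0.25] * 4))     # four equiprobable DNA letters: 2.0 bits
    print(shannon_bits([1 / 52] * 52))  # one card named from a full pack: about 5.7 bits
    print(shannon_bits([0.9, 0.1]))     # a very unsurprising two-way message: about 0.47 bits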

Information and evolution

That's enough background on information theory. It is a theory which has long held a fascination for me, and I

have used it in several of my research papers over the years. Let's now think how we might use it to ask

whether the information content of genomes increases in evolution. First, recall the three-way distinction

between total information capacity, the capacity that is actually used, and the true information content when

stored in the most economical way possible. The total information capacity of the human genome is

measured in gigabits. That of the common gut bacterium Escherichia coli is measured in megabits. We, like

all other animals, are descended from an ancestor which, were it available for our study today, we'd classify

as a bacterium. So perhaps, during the billions of years of evolution since that ancestor lived, the information

capacity of our genome has gone up about three orders of magnitude (powers of ten) - about a thousandfold.
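
For what it is worth, the same order-of-magnitude conclusion falls out of a rough calculation with round present-day genome sizes, figures assumed here rather than taken from the text, at two bits per DNA letter.

    human_bases = 3_200_000_000   # roughly 3.2 billion base pairs (assumed round figure)
    ecoli_bases = 4_600_000       # roughly 4.6 million base pairs (assumed round figure)

    human_bits = human_bases * 2  # 2 bits per DNA letter
    ecoli_bits = ecoli_bases * 2
    print(f"Human genome capacity: about {human_bits / 1e9:.1f} gigabits")
    print(f"E. coli genome capacity: about {ecoli_bits / 1e6:.1f} megabits")
    print(f"Ratio: roughly {human_bits / ecoli_bits:.0f}-fold")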

This is satisfyingly plausible and comforting to human dignity. Should human dignity feel wounded, then, by

the fact that the crested newt, Triturus cristatus, has a genome capacity estimated at 40 gigabits, an order of

magnitude larger than the human genome? No, because, in any case, most of the capacity of the genome of

any animal is not used to store useful information. There are many nonfunctional pseudogenes (see below)

and lots of repetitive nonsense, useful for forensic detectives but not translated into protein in the living cells.

The crested newt has a bigger "hard disc" than we have, but since the great bulk of both our hard discs is

unused, we needn't feel insulted. Related species of newt have much smaller genomes. Why the Creator

should have played fast and loose with the genome sizes of newts in such a capricious way is a problem that

creationists might like to ponder. From an evolutionary point of view the explanation is simple (see The

Selfish Gene pp 44-45 and p 275 in the Second Edition).

Gene duplication

Evidently the total information capacity of genomes is very variable across the living kingdoms, and it must

have changed greatly in evolution, presumably in both directions. Losses of genetic material are called

deletions. New genes arise through various kinds of duplication. This is well illustrated by haemoglobin, the

complex protein molecule that transports oxygen in the blood.

Human adult haemoglobin is actually a composite of four protein chains called globins, knotted around each

other. Their detailed sequences show that the four globin chains are closely related to each other, but they

are not identical. Two of them are called alpha globins (each a chain of 141 amino acids), and two are beta

globins (each a chain of 146 amino acids). The genes coding for the alpha globins are on chromosome 16;

those coding for the beta globins are on chromosome 11. On each of these chromosomes, there is a cluster

of globin genes in a row, interspersed with some junk DNA. The alpha cluster, on chromosome 16, contains

seven globin genes. Four of these are pseudogenes, versions of alpha disabled by faults in their sequence

and not translated into proteins. Two are true alpha globins, used in the adult. The final one is called zeta and



is used only in embryos. Similarly the beta cluster, on chromosome 11, has six genes, some of which are

disabled, and one of which is used only in the embryo. Adult haemoglobin, as we've seen, contains two alpha

and two beta chains.

Never mind all this complexity. Here's the fascinating point. Careful letter-by-letter analysis shows that these

different kinds of globin genes are literally cousins of each other, literally members of a family. But these

distant cousins still coexist inside our own genome, and that of all vertebrates. On the scale of the whole

organism, the vertebrates are our cousins too. The tree of vertebrate evolution is the family tree we are all

familiar with, its branch-points representing speciation events - the splitting of species into pairs of daughter

species. But there is another family tree occupying the same timescale, whose branches represent not

speciation events but gene duplication events within genomes.

The dozen or so different globins inside you are descended from an ancient globin gene which, in a remote

ancestor who lived about half a billion years ago, duplicated, after which both copies stayed in the genome.

There were then two copies of it, in different parts of the genome of all descendant animals. One copy was

destined to give rise to the alpha cluster (on what would eventually become chromosome 16 in our genome),

the other to the beta cluster (on chromosome 11). As the aeons passed, there were further duplications (and

doubtless some deletions as well). Around 400 million years ago the ancestral alpha gene duplicated again,

but this time the two copies remained near neighbours of each other, in a cluster on the same chromosome.

One of them was destined to become the zeta of our embryos, the other became the alpha globin genes of

adult humans (other branches gave rise to the nonfunctional pseudogenes I mentioned). It was a similar story

along the beta branch of the family, but with duplications at other moments in geological history.

Now here's an equally fascinating point. Given that the split between the alpha cluster and the beta cluster

took place 500 million years ago, it will of course not be just our human genomes that show the split - possess alpha genes in a different part of the genome from beta genes. We should see the same within-genome split if we look at any other mammals, at birds, reptiles, amphibians and bony fish, for our common

ancestor with all of them lived less than 500 million years ago. Wherever it has been investigated, this

expectation has proved correct. Our greatest hope of finding a vertebrate that does not share with us the

ancient alpha/beta split would be a jawless fish like a lamprey, for they are our most remote cousins among

surviving vertebrates; they are the only surviving vertebrates whose common ancestor with the rest of the

vertebrates is sufficiently ancient that it could have predated the alpha/beta split. Sure enough, these jawless

fishes are the only known vertebrates that lack the alpha/beta divide.

Gene duplication, within the genome, has a similar historic impact to species duplication ("speciation") in

phylogeny. It is responsible for gene diversity, in the same way as speciation is responsible for phyletic

diversity. Beginning with a single universal ancestor, the magnificent diversity of life has come about through

a series of branchings of new species, which eventually gave rise to the major branches of the living

kingdoms and the hundreds of millions of separate species that have graced the earth. A similar series of

branchings, but this time within genomes - gene duplications - has spawned the large and diverse population

of clusters of genes that constitutes the modern genome.

The story of the globins is just one among many. Gene duplications and deletions have occurred from time to

time throughout genomes. It is by these, and similar means, that genome sizes can increase in evolution. But

remember the distinction between the total capacity of the whole genome, and the capacity of the portion that

is actually used. Recall that not all the globin genes are actually used. Some of them, like theta in the alpha

cluster of globin genes, are pseudogenes, recognizably kin to functional genes in the same genomes, but

never actually translated into the action language of protein. What is true of globins is true of most other

genes. Genomes are littered with nonfunctional pseudogenes, faulty duplicates of functional genes that do

nothing, while their functional cousins (the word doesn't even need scare quotes) get on with their business in

a different part of the same genome. And there's lots more DNA that doesn't even deserve the name

pseudogene. It, too, is derived by duplication, but not duplication of functional genes. It consists of multiple

copies of junk, "tandem repeats", and other nonsense which may be useful for forensic detectives but which

doesn't seem to be used in the body itself.

Once again, creationists might spend some earnest time speculating on why the Creator should bother to

litter genomes with untranslated pseudogenes and junk tandem repeat DNA.

Information in the genome


