Sunday, March 29, 2009

Evolutionary Medicine - Draft

Evolution is biology’s great Big Idea.  It is the lens through which facts are interpreted, as well as the vantage point by which new data are judged.  While physicists go on searching for their great unifying Theory of Everything, biologists have essentially had it since Darwin’s theory of natural selection was married with population genetics in the 1930s.

 

Medicine, of course, falls within the biological sciences, but sometimes you would hardly guess it.  This very interesting graphic [“Map of Science”], done in 2007, illustrates the citation flow between disciplines within science.  As you can see, there is much crosstalk between the disciplines of medicine and molecular and cell biology, and a considerable exchange of citations between medicine and neuroscience, but the communication link between evolution and medicine is in fact so tenuous that it isn’t even depicted.

 

Does this matter? Well, evolutionary theory offers the ability to view and answer questions from an ultimate, rather than a proximate vantage point.  Both sets of questions are equally valid, but they are different.  Think, for instance, of the famous case of the sickle cell allele.  The proximate explanation, provided for us by molecular and cell biologists, is of course that this disease results from a single nucleotide mutation in the beta globin gene.  As a result of this, a glutamate is replaced by a valine in the translated protein, and the haemoglobin molecule risks insoluble polymerisation at low oxygen tensions.

 

But to stop there would sell us short.   The ultimate explanation for sickle cell anaemia operates on quite another level.  Significantly, sickle cell disease is an autosomal recessive trait, and to an evolutionist that opens up an entire vista of possibilities.  Why is this damaging recessive allele maintained in the population?  The main part of the answer is well-known: heterozygous individuals on the whole produce enough normal haemoglobin to avoid symptoms, but they are significantly more resistant to malaria.  Thus the recessive gene, far from being eliminated, is maintained in what geneticists call a balanced polymorphism, since the heterozygous phenotype leads to positive selection pressure on the allele, whereas the homozygous phenotype leads to negative selection pressure.  This fact neatly explains the impressive correlation between the historical distribution of malaria [Malaria Distribution] and the ancestral distribution of the sickle cell trait [Sickle Cell Distribution].

 

Randolph Nesse, arguably the leading proponent of evolutionary medicine, put the difference in approach best: “We are not asking why some people get sick, which is what most medical research asks, but why all humans are vulnerable to disease.”  Natural selection is an enormously powerful force, capable of producing an eye that is sensitive to a single photon, a brain capable of love, hate and innumerable calculations and machinations, and homeostatic processes that make the mind boggle.  So why do we still get sick?  And why do we have an appendix, wisdom teeth, a fallible immune system, or an ageing body?

 

Nesse proposed a useful list of 6 reasons for our vulnerability [show a slide!], and it’s this schema that I’ll be concentrating on.  My aim is to introduce some of the many evolutionary theories of disease to those who haven’t yet heard them. 

 

The first two categories centre on the idea that natural selection, whilst an enormously powerful force, isn’t necessarily a quick one by our standards.  More specifically, the rate of evolutionary change in a population is inversely proportional to the generation time.  Our generation time is about 20 to 25 years, which leaves only a handful of generations between here and the dawn of civilisation – not even close to the number needed to effect significant evolutionary change.  The first civilisation is widely believed to be the Sumerian one in Mesopotamia (modern-day Iraq), and that started in about 5500 BC.  That’s only about 7500 years ago – roughly 300 generations.  That may still sound like a lot, but that’s only 300 paternal or maternal ancestors between yourself and prehistory.  Imagine holding your father’s left hand with your right hand, and continuing this pattern with his father, and his father’s father, and so on.  If you each stood a metre apart, the entire ancestral chain wouldn’t even get from here to casualty.

 

Since the dawn of civilisation our existence has undergone some quite dramatic changes with regard to disease burden, types of labour, types of food, and countless other trappings.  The problem is that 99% of our existence as a species predates all of this.  The result: there are many parts of our bodies that aren’t well adapted to the modern world, and this explains the first category of disease: “mismatch with the modern environment.”

 

It is consensus opinion that the rates of allergies and autoimmune diseases have increased markedly in recent times.  The increase has been so rapid that an environmental cause must be at the bottom of it all.  Numerous candidates have been proposed, and most centre on the “Hygiene Hypothesis”, which claims that it is paradoxically our rather sanitised modern lifestyles that predispose us to these conditions.  Increasingly, however, research is pointing in the direction of the helminths, since cross-sectional studies have shown a consistently negative relationship between helminth infection and allergic diseases.  These results have been confirmed by most, though not all, interventional studies.  Most of the facts begin to line up when you consider that helminths, common enough today, were much more so in our evolutionary past.  Thus our immune systems evolved with the expectation of a significant helminth load, and the corollary of this is that helminths must themselves have evolved immunomodulatory mechanisms to ensure their own survival.  Take away the helminths, however, and the balance is disturbed.  Of course, it has long been known that the IgE system is intimately involved with both aspects, but specifics are beginning to come to light too.  For instance, in 2007 it was shown that helminths secrete a protein (ES-62) that down-regulates the Th2 (type 2 helper T cell) response.  There is also abundant cross-reactivity between antigens on schistosomes and house dust mites.

 

Interestingly, the immunomodulatory effects of helminths (or rather the lack thereof in today’s First World) have also been linked to several autoimmune diseases.  For instance, one study in 2005 reported that inflammatory bowel disease patients treated with the sterilized eggs of Trichuris suis improved markedly within months (43% improvement for U.C., 72% improvement for Crohn’s).  The net has been extended to multiple sclerosis, with a small but well-designed study showing that patients who were recently infected by intestinal helminths had a much, much slower rate of disease progression [MS and Helminths].  The (American) National Multiple Sclerosis Society is presently conducting a phase 2 trial to extend this research.  At present, it must be admitted that the evidence is sketchy, but evolutionary insights should never replace hard data anyway.  Rather, their role is to offer predictions as to where to look, and what to test for.  Theories like the Hygiene Hypothesis can only be helpful, even if they are eventually disproved.

 

The second reason for our vulnerability to disease concerns the pathogens that coevolve with us.  Since the amount of replicating a pathogen can do within any one host is limited, it needs an escape route to a new host, and it must often temper its virulence to keep that route open.  The infamous Spanish Flu pandemic emerged during the final year of World War I, and that is no coincidence.  Influenza is usually a severe but not lethal disease; it needs to keep us alive for long enough to spread itself.  World War I’s trenches, however, removed that constraint.  Suddenly the virus could replicate as fast as it liked without being in significant peril of being stranded within a dying body: as soldiers got sick, they were simply replaced by healthy ones.  The constant supply of new replication opportunities (read: soldiers) meant that the check on virulence that had previously been in place no longer applied.

 

A similar thing can be confidently predicted with HIV and condom usage.  Although condoms obviously decrease HIV transmission significantly (Pope Benedict’s heterodox epidemiological studies notwithstanding), their widespread use can also be expected to lead to the generation of less virulent HIV strains.  Again, a virus must be able to spread itself to another host before its current host dies, and condom usage means that the average time before it infects someone else will be extended.  It follows that its virulence must be somewhat lessened if it is to survive.
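
For the numerically inclined, here is one way to make that argument concrete.  It is only a toy sketch, not a real epidemiological model, and every parameter in it is invented for illustration: I assume that the chance of transmission per contact rises with the pathogen’s replication rate but with diminishing returns, that extra replication kills the host proportionally faster, and that competing strains are ranked by how fast they can spread during an epidemic.

    # Toy model of the virulence trade-off sketched above (illustrative only).
    # Assumptions (mine, not from the essay): per-contact transmission probability
    # saturates with the pathogen's replication rate v, i.e. p(v) = v / (v + k),
    # while host death rate rises linearly with v. Competing strains are compared
    # on their epidemic growth rate r(v) = c * p(v) - (v + mu), where c is the
    # rate of contacts with susceptible hosts.

    def growth_rate(v, c, k=1.0, mu=0.05):
        """Epidemic growth rate of a strain with replication rate (virulence) v."""
        transmission = c * v / (v + k)   # new infections caused per unit time
        loss = v + mu                    # host death: virulence plus background mortality
        return transmission - loss

    def optimal_virulence(c, v_grid=None):
        """Virulence that maximises the growth rate, found by brute-force search."""
        if v_grid is None:
            v_grid = [i / 1000 for i in range(1, 20001)]  # v from 0.001 to 20
        return max(v_grid, key=lambda v: growth_rate(v, c))

    for label, contacts in [("peacetime (few contacts)", 2.0),
                            ("trench warfare (constant new hosts)", 50.0)]:
        print(f"{label}: favoured virulence ~ {optimal_virulence(contacts):.2f}")

    # With scarce transmission opportunities the favoured strain is mild; flood
    # the system with fresh hosts and much nastier strains win out. Within this
    # toy model, condom use works the same way in reverse: fewer transmission
    # chances per unit time push the favoured virulence back down.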

 

Among laypeople, the most common explanation offered for why we are vulnerable to disease is simply that “evolution isn’t strong enough” to rid us of all susceptibilities.  As I’ve said, an evolutionary perspective actually lessens the force of this argument – there are so many other possible explanations for disease susceptibility besides natural selection’s impotence.  But it is undeniable that even a force mighty enough to shape eyes and brains has its limits.  A nice example comes in the case of deleterious recessive genes.  They will be punished appropriately by natural selection, but they must first be seen.  As their frequency drops within the gene pool, the selection pressure against them drops even faster, with the result that deleterious recessive alleles at low frequencies are very hard to get rid of.
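
The arithmetic behind this is textbook population genetics, and a few lines of code make the point vividly.  This is a minimal sketch assuming random mating (Hardy-Weinberg proportions) and, to make the case as extreme as possible, a recessive allele that is outright lethal in homozygotes; the starting frequency is invented.

    # Minimal sketch of why rare recessive alleles are so hard to purge.
    # Standard one-locus recursion with genotype fitnesses 1, 1, 1-s
    # (selection acts only on the aa homozygote). With s = 1 the homozygote
    # never reproduces at all - the harshest case possible.

    def next_generation(q, s=1.0):
        """Allele frequency of 'a' after one round of selection against aa."""
        mean_fitness = 1 - s * q * q
        return q * (1 - s * q) / mean_fitness

    q = 0.10          # start with the recessive allele at 10%
    for gen in range(1, 201):
        q = next_generation(q)
        if gen in (1, 10, 50, 100, 200):
            print(f"generation {gen:3d}: q = {q:.4f}")

    # Even a fully lethal recessive is still at 5% after ten generations and
    # only around 1% after a hundred; the decline then slows to a crawl,
    # because at low frequency almost every remaining copy hides in an
    # unaffected heterozygote, where selection cannot see it.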

 

Another example of an evolutionary constraint is the case of the appendix.  Most biologists view it as an evolutionary relic (ancestral species would have used a much enlarged version as a pocket to store cellulose-digesting bacteria), but there are some holdouts.  Obviously the appendix does perform some functions (it houses a large collection of mucosa-associated lymphoid tissue, for instance), but it’s poor logic to claim that any function derived from a tissue is its reason for existence or persistence.  The matter is settled by the fact that a patient suffers no demonstrable deficiency of function after an appendicectomy.  So why does the troublesome organ persist?  Williams and Nesse offered one ingenious hypothesis, which notes that the appendix only gives us trouble because of its small size.  When it was its usual large self in our ancestral past, it was presumably no more likely to become occluded than the next part of the large intestine.  Paradoxically, this fact traps evolution, barring it from proceeding any further in the direction of making the appendix smaller, since any decrease in size will actually be punished by increased mortality from appendicitis!

 

Other interesting evolutionary relics reflect the fact that natural selection can only work with what it is given, and so sometimes comes up with bizarre designs which are only illuminated by looking at our ancestry.  The recurrent laryngeal nerve on the left goes on a bizarre walkabout [details from Dawkins/Williams].  … think of how wasteful it is in a giraffe!  Other examples include several awkward anatomical patch-jobs that had to be rushed into production as we evolved bipedality.  [Show list.]

 

The fact that our bodies may be largely bundles of compromises is also counter-intuitive.  Of course, we’re already a little familiar with the idea: the sickle cell allele is a special kind of trade-off, whereby the best option for evolution in some parts of the world is a partial deficiency!  Ageing is perhaps the biggest compromise of all, according to the leading theory of senescence.  The first step in understanding why we age is to distinguish between wear-and-tear and senescence.  Obviously all things are subject to attrition, but the crucial question is not why this is so, but why it isn’t repaired or overcome.  The question isn’t why our molars wear down with age, but why they aren’t replaced.  Agelessness isn’t some hopeless dream; it’s amply displayed in cancer cell lines grown around the world, which show no senescence at all.  Why aren’t we like that?

 

Geneticists had long since noted an interesting fact: the selection pressure on a gene (either positive or negative) necessarily decreases with age, because even without ageing, we wouldn’t be immortal.  Sooner or later, a lightning bolt, a fall, a rival or (more likely) a microbe would get us, and so theoretical immortality would never translate into actual immortality.  To use an extreme example, a gene that had an effect at age 999 would almost never have the chance to express itself, and so would have practically no selection pressure acting upon it, either positively or negatively.

 

Williams took this idea to the next level with his brilliant theory of “antagonistic pleiotropy”.  He first noted that most genes are pleiotropic; that is, they have more than one effect.  He then reasoned that pleiotropic genes whose positive effects occur earlier in life than their negative effects could be selected for – even if the positive effects were outweighed by the negative ones.  This is because effects towards the beginning of life are weighted as more significant, since they occur while the gene is under maximum selection pressure.  The result, said Williams (and most researchers since), is ageing.

 

A vivid example from his original paper is that of a hypothetical gene that increased calcium deposition in the bones, but also, gradually, in the coronary arteries.  Even though enough calcium deposition in the coronary arteries would eventually be fatal, the selection pressure against this effect would be weak, since it would act late in life, when many carriers would already have died from other causes.  It is quite conceivable that such a gene would be selected for, since the strong selection pressure towards stronger bones would ‘outvote’ the weaker selection pressure against coronary artery disease.
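
A toy calculation shows how the arithmetic works out.  Everything in it is invented purely for illustration: I assume a constant yearly risk of death from ‘extrinsic’ causes (accidents, predators, infections), and simply weight each of the gene’s effects by the probability that a carrier survives long enough to feel it.

    # Toy calculation of Williams' point (all numbers invented for illustration).
    # Assume a constant 6% per-year chance of dying from extrinsic causes, so
    # few individuals ever reach old age, and weight each fitness effect by the
    # probability of surviving to the age at which it acts.

    EXTRINSIC_MORTALITY = 0.06

    def survival_to(age):
        """Probability of still being alive at a given age, from extrinsic causes alone."""
        return (1 - EXTRINSIC_MORTALITY) ** age

    # Hypothetical pleiotropic gene: a modest reproductive benefit at age 20
    # (stronger bones), and a much larger cost at age 60 (calcified coronaries).
    benefit = (+0.05, 20)   # (effect on fitness, age at which it acts)
    cost    = (-0.50, 60)

    def weighted_effect(effect, age):
        return effect * survival_to(age)

    net = weighted_effect(*benefit) + weighted_effect(*cost)
    print(f"weight at age 20: {survival_to(20):.2f}")
    print(f"weight at age 60: {survival_to(60):.2f}")
    print(f"net selection on the gene: {net:+.4f}")

    # The raw cost is ten times the raw benefit, yet the survivorship-weighted
    # sum still comes out (just) positive: so few carriers ever reach 60 that
    # the late damage barely registers. Stack up enough genes like this and,
    # on Williams' account, you get ageing.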

 

Such compromises are likely to be legion.  For instance (Steven Pinker quotes).  Draw distinction between this ultimate explanation and the possible proximate free radical explanation.

 

The fact that natural selection maximises genetic replication, rather than health, is a counter-intuitive bar to many people’s understanding.  As Dawkins has famously noted, the genes in our gonads are the only things that are potentially immortal, and this is the unit of natural selection.  Bodies are temporary vehicles that further the replicatory efforts of the genes, rather than the apple of evolution’s eye.  ?Again Dawkins: “[Virus vs Elephant quote]”.

 

Understanding that genetic replication, rather than somatic longevity, is the fundamental aim of evolution brings into view battlegrounds where we otherwise would never have dreamt to look.  Most famous is Trivers and Haig’s idea of maternal-offspring conflict, which hinges on the fact that mother and child aren’t clones.  A mother obviously carries 100% of her own genes, but the fetus carries only half of them.  From her genes’ point of view, therefore, although the fetus is a valuable bundle of genetic replication, it isn’t the be-all and end-all.  The fetus, with its own set of genes, reasons differently.  Specifically, the mother’s genes won’t want her to sacrifice as much of her resources to the fetus as the fetus will want.  This [Pinker/Williams quote about IGF].  It’s worth noting that this explanation is almost never referred to in medical textbooks.

 

Lastly we have the “smoke detector principle”: so long as the defence itself is cheap, the body will often err on the side of extreme sensitivity in detecting and dealing with potential problems, even when the result is mostly false positives.  A smoke detector would rather be wrong 100 times and right about the one true fire than be more discriminating and miss the fire.  So it is with many of our body’s defences.  The archetypal example is fever.  Although it is usually regarded as merely a troublesome side-effect of infections, the evolutionary evidence says otherwise.  Fever is one of the most conserved responses among higher animals, and is the result of an impressively complicated apparatus centred in the anterior hypothalamus.  The notion that it is a pathological side-effect of bacteraemia is absolute nonsense when viewed from an evolutionary perspective.  The hypothalamic set point has been designed to be sensitive to numerous cytokines involved in the inflammatory process; if this were deleterious on balance, it would be exceptionally easy to prevent.  Just stop the hypothalamus from responding to cytokines!  Furthermore, we have direct evidence that fever interferes with bacterial enzymes, enhances leukocyte mobility and phagocytosis, and promotes the proliferation of T cells.
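
The principle is really a small piece of decision theory, and a few lines of arithmetic make it concrete.  The costs below are invented purely for illustration; the only thing that matters is their ratio.

    # The smoke detector principle as a bare expected-cost calculation
    # (all costs invented for illustration). Mounting the defence (fever,
    # panic, vomiting...) costs a little every time; failing to mount it
    # when the threat is real occasionally costs everything.

    COST_OF_RESPONSE = 1.0         # the discomfort and energy of a fever
    COST_OF_MISSED_THREAT = 500.0  # an unchecked serious infection

    def should_respond(p_threat):
        """Respond whenever the expected cost of ignoring exceeds the cost of acting."""
        return p_threat * COST_OF_MISSED_THREAT > COST_OF_RESPONSE

    for p in (0.5, 0.05, 0.01, 0.005, 0.001):
        print(f"P(real threat) = {p:>5}: respond -> {should_respond(p)}")

    # With these numbers the threshold sits at p = 1/500 = 0.002, so the body
    # 'correctly' fires on cues that are wrong 99% of the time or more.
    # Mostly false alarms is exactly what a well-tuned cheap alarm looks like.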

 

A fair few trials have now been done to see whether treatment of fever might actually be harming the patient.  The results are contradictory.  In some cases, aggressive treatment of fever has been shown to increase mortality, whilst other studies have shown neither harm nor benefit.  Why the ambiguous results?  Well, consider the smoke detector principle.  The body fires all its guns in the hope that enough of them will hit the target, but fever is often one that misses.  Fever seems most effective against bacteria, but the body dutifully raises the hypothalamic set point, just in case, in response to anything that causes the fairly non-specific inflammatory cytokines to be released.  Fever can therefore accompany conditions where it likely offers no benefit, such as viral infections (e.g. influenza), or inflammation from non-infectious causes (e.g. burns).  Even within the bacterial group, there are likely certain organisms or situations where fever has a greater effect than in others.  But because fever is deployed so indiscriminately, studies that lump all febrile patients together mix the cases where it helps with the many where it doesn’t, and end up underpowered.  What is needed is much more targeted research.  Does fever help in acute pneumonias, or in any bacteraemia?  If so, for which organisms does it help most?  Once we are able to answer questions like these, we will have a very clear idea of when to treat fever (which is undeniably uncomfortable) and when to let it run its course.  The evolutionary viewpoint can guide us in this research by assuring us that there is a point to fever, even if we don’t yet know what it is.

One little anecdote to end.  In my first year studying medicine, I was told by my lecturer that fever is beneficial under certain circumstances.  About a week later I got quite sick and was diagnosed with influenza.  Doggedly determined to stick it out, I declined all treatment, and was confined to my room for three days with a temperature of around 39°C.  I alternated between cycles of shivering and sweating, and it was one of the worst few days I’ve had over the last decade.  Nonetheless, I was consoled that at least I had shortened the duration of my illness.  Now there are all sorts of stupid things about that story, but nothing was quite as disheartening as when I did microbiology in second year, and learnt that influenza is caused by a virus.  My fever, thanks to the smoke detector principle, was probably incidental and useless to me.

 

Thank you.

 

 

draft

·        Mismatch – hygiene hypothesis for allergies, autoimmune diseases

·        Pathogens coevolving with hosts – E. coli vs humans (?citrate example) – evolutionary rationale for sex & HLA-discordant partners

·        Constraints on what selection can do (? Appendicitis)

·        Trade-offs (malaria, ageing)

·        Selection maximises genetic replication, not health (mother-child conflicts in the womb → maternal diabetes, GPH, ?my own theory)

·        Smoke-detector principle (my pyretic response to viral influenza)

 

 

Thursday, April 03, 2008

Brotherhood of the Genes

It is no exaggeration to say that one of the greatest expansions of evolution's conceptual horizons has been the gene's-eye view, first formally articulated by G.C. Williams, and later amplified and extended by Richard Dawkins. Briefly, evolution from this vantage point is all about the differential survival of certain alleles over their competitors at a particular locus - what matters to evolution is genes, not the individuals that they make. Individuals are thus relegated to 'survival machines' (in Dawkins' famous phrase); far from being central to the selection process of evolution, they are mere servants, protectors of their potentially immortal creators.

It's easy to imbibe this sort of talk (once you've gotten over the shock of it), but it's quite another thing to actually start to think of evolutionary adaptations in this way. It's still tempting to think of the whale's strong propeller tail as being for the benefit of the whale, but a gene-centred view renders this outlook false. A whale's strong tail is for the benefit of the whale's genes, not for the transient, and ultimately doomed, body they bring about.

Usually, such a correction borders on pedantic, since what is good for the whale is also good for the whale's genes (as in the above case). But it is nonetheless important to bear the distinction in mind, because there are some fascinating situations in which the interests of the body and its genes diverge. As we've stated already, in these cases, we should expect the genes to 'win' - adaptations, whether organs or behaviour, are for their benefit after all. Bodies don't even get a vote.

A typically enthralling and brilliant example is William Hamilton's theory of 'kin selection'. The theory, like so many in evolution, is so ingenious as to appear almost banal. Yet it is anything but. Recall that evolution can be anthropomorphised as genes struggling selfishly to replicate themselves. Now, with this in mind, how would the bodies that they created act? Well, in order to facilitate the genes' survival and replication, bodies might often appear to us as self-interested. In most cases, it simply wouldn't make evolutionary sense to program one's own 'survival machine' to work for the exclusive benefit of another survival machine holding rival genes. So I'll bet that, however well-intentioned you are, you don't think about world hunger as often as you do your own. And I'd bet you'd object to the idea of me summarily removing your heart to replace another man's ailing one. So would I, but did you ever stop to consider why?

However, selfish genes don't always translate into selfish bodies. Far from it, and this is where Hamilton's theory enters. When an animal behaves "selfishly" its behaviour is obviously controlled by its brain, the development of which was organised by its genes. But the particular genes in the animal's brain aren't directly replicated, of course - they are a dead-end lineage. The brain isn't passed on to one's offspring! Rather, they are working in the service of copies of themselves sitting in the animal's gonads. With this in mind, I'll let the brilliant Steven Pinker continue the narrative for a while:

But here is an important twist. The genes in an animal's gonads are not the only extant copies of the brain-building genes; they are merely the most convenient ones for the brain-building gene to help replicate. Any copy capable of replicating, anywhere in the world, is a legitimate target, if it can be identified and if steps can be taken to help it replicate. A gene that worked to replicate copies of itself inside some other animal's gonads could do as well as a gene that worked to replicate copies of itself inside its own animal's gonads. As far as the gene is concerned, a copy is a copy; which animal houses it is irrelevant.


But how on earth could a gene identify copies of itself in another body? Clearly, it can't peer directly into the nuclei of a helpless colleague's cells and do a DNA analysis. What is required is a sort of 'rule of thumb' - a much more visible proxy marker that signals, with at least a reasonable degree of accuracy, that the animal shares some of your genes. The solution, as suggested by the theory's name of course, is to identify one's kin.

A brother shares approximately 50% of your genes by common descent. So do parents and children. Zooming out gradually, a grandparent shares around 25% of your genes, as does a grandchild and an aunt or uncle, and a cousin shares roughly 12.5%, and so on. So when would it favour your genes (not you, remember) to try to get you to perform a costly action for the benefit of a relative? Obviously that depends on how close the relative is. The closer the relative, the more genes of yours he or she shares, and consequently the greater the benefit, to your genes, from any altruistic act targeted towards his or her body. Or, to approach from the opposite direction, animals should be predisposed to make greater sacrifices for closer relatives. This last insight was pithily summarised by biologist J.B.S. Haldane's remark that although he wouldn't sacrifice himself for one brother, he would gladly do so for either two brothers or eight cousins!
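
Hamilton later formalised this reasoning in what is now called Hamilton's rule: an allele for altruism can spread whenever rB > C, where r is the relatedness between actor and recipient, B the benefit to the recipient and C the cost to the actor. Here is a minimal sketch of the rule, with Haldane's quip worked out in numbers; benefits and costs are expressed in the crude currency of 'lives', purely for illustration.

    # Hamilton's rule in one line: an 'altruism gene' spreads when r * B > C,
    # where r is relatedness, B the benefit to the recipient and C the cost
    # to the altruist (both measured here, crudely, in lives).

    RELATEDNESS = {"identical twin": 1.0, "sibling": 0.5, "parent/child": 0.5,
                   "grandchild": 0.25, "uncle/aunt": 0.25, "cousin": 0.125}

    def inclusive_payoff(relation, benefit, cost):
        """Net payoff to the altruist's genes: r*B - C (positive means the act pays)."""
        return RELATEDNESS[relation] * benefit - cost

    # Haldane's quip, in numbers: sacrificing your own life (cost = 1) to save...
    print(inclusive_payoff("sibling", benefit=1, cost=1))   # one brother:   -0.5, don't
    print(inclusive_payoff("sibling", benefit=2, cost=1))   # two brothers:   0.0, break even
    print(inclusive_payoff("cousin",  benefit=8, cost=1))   # eight cousins:  0.0, break even
    print(inclusive_payoff("cousin",  benefit=9, cost=1))   # nine cousins:  +0.125, worth it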

So kin selection fundamentally explains why we sacrifice our money, time and countless other resources for our children, and conversely why we love our parents, why calls to 'focus on the family' are often seen as virtuous, and why farmlands and trades are traditionally kept in the family instead of simply being sold off to the highest bidder. Of course, there are many fine-tunings to the basic theory as I've outlined it above. For instance, the emotion to assist and care for family (i.e. love) is tweaked according to who is likely to live longer (hence parents give more to their children than they get back)... but such ingenious tinkering is another tale.

Kin selection also has a darker side, and it is this footnote that I wish to explore. I believe that it helps solve the riddle of apnoeas in newborns.

As the word suggests, an apnoea is a period where breathing ceases. (Neonatal apnoeas are not to be confused with the topical "sleep apnoeas" which share the respiratory arrest component, but little else.) Apnoea in the newborn period is a nightmare for parents, doctors and nurses. As the apnoea progresses, the child starts turning blue and the heart, starved of oxygen like the rest of the body, slows down. Sometimes the child starts breathing again spontaneously. Sometimes, as in an intensive care unit, it is noticed by a doctor, a nurse or a machine (in increasing order of likelihood) and the child is resuscitated. And sometimes, the child never recovers its respiration, and dies.

What could cause such a ruinous state of affairs? Well, the first distinctly odd thing about it is that it can be caused by almost any serious condition. So as not to feel so powerless, doctors are trained in some of the more likely causes, but even this list can't be rattled off quickly enough if you're late for an appointment. Forgive the medical jargon, but it is worth having a look at how long one list of common causes is (in no particular order): hypoglycaemia, meningitis, septicaemia, asphyxia, pneumonia, hyaline membrane disease, anaemia, cardiac failure, patent ductus arteriosus, hypocalcaemia, hyponatraemia, convulsions, intraventricular haemorrhage, hypothermia, hyperthermia, maternal sedation, airways obstruction. If you've had any medical training, it may not escape your notice that that just about covers all the likely serious conditions a neonate could ever dream of having. This is odd fact number one.

Odd fact number two is that apnoeas are overwhelmingly likely to occur in premature babies. In fact, prematurity alone is probably enough to cause an apnoea. In 'term' (i.e. 9 month gestation) babies, one of the causes above is often found. But in premature babies, you usually search in vain.

The diversity of causes, and the targeting of premature babies, is often explained away by referring in a nebulous way to the supposed "immaturity of the respiratory system". Supposedly, the parts of the brain that regulate breathing are so delicate that even the slightest perturbation seems to cause the system to crash. And presumably, on this view, the respiratory centre is so underdeveloped in very preterm infants that their unexplained cases of apnoea are simply caused by a disturbance that is too mild to detect.

At first glance, this hypothesis seems to fit. It explains why prematurity is such a risk factor, as well as why adults (let alone older children) don't develop apnoeas in response to the same stresses. However, the explanation strikes me as deficient in at least one very important regard, namely that it comes very close to side-stepping the question totally. Even if it were possible to prove "prematurity" of the respiratory centre at all (which it isn't, at least yet), the answer stops abruptly short of answering the obvious follow-up question: why on earth would the respiratory centre be immature?

When you think about it, the "prematurity" answer starts to disintegrate before your eyes. For instance, to pick on another vital organ, the heart has had just as much time to develop as the respiratory centre, yet it is functional enough. In health, a newborn's heart comfortably pumps a staggering volume of blood - on the order of a thousand litres - around its little body every day. Even the lungs themselves are adequately developed for the neonate's needs after only 34 weeks of gestation (average gestation is about 40 weeks). And the part of the brain that controls the cardiovascular system is apparently quite hardy and "mature". Yet we are asked to believe that the neighbouring respiratory centre simply packs up with the slightest nudge. Why? Basically the nub of the issue is this: if you were designing the body, wouldn't you want the vital respiratory centre online as early as possible? If survival is key, why not ensure that the last system to call it quits is the respiratory system? The child should surely struggle on despite any malady that afflicts it. It should valiantly struggle to breathe no matter what the odds. Shouldn't it?

Ah, but we are viewing the scene from the vantage point of an actor that plays no part in this play. We have forgotten the doctrine of the selfish gene - bodies don't matter, except in so far as they ensure the replication of genes, remember? What does this paradox look like to a gene?

If you will pardon a quick diversion, let us approach the question with the help of an analogous case – that of a runt in a litter of pigs. Its prospects for survival are roughly proportional to its size. A pig only slightly smaller than the rest of the litter should struggle on and fight for maternal investment (food and time) as much as the rest of the litter, if not more so. Failure to do so would be silly, evolutionarily speaking, since the consequence for the pig, and the genes it carries, would otherwise be terminal. But what if the runt were very small, so that its chances of survival were extremely remote? Now we must remember kin selection and a gene-centred view. Such a runt would still take as much maternal investment as the rest of the litter, but the odds of this investment going to waste (if the runt dies) are now very high. Like human parental love, perhaps it can benefit its genes most by an act of altruism directed at kin who are likely to share some of its own genes. Indeed, it can, but in this case the sacrifice is the ultimate one: death. As shocking as it may seem, it is simply a matter of logic that there must be a point where the runt's chances of survival are so slim that it would be in its own genes' interests (via its siblings) if parental investment were cut off totally and spread around its brothers and sisters instead. Dawkins put it well in The Selfish Gene:

We might suppose intuitively that the runt himself should go on struggling to the last, but the theory does not necessarily predict this... That is to say, a gene that gives the instruction 'Body, if you are very much smaller than your litter-mates, give up the struggle and die' could be successful in the gene pool, because it has a 50 per cent chance of being in the body of each brother and sister saved, and its chances of surviving in the body of the runt are very small anyway. There should be a point of no return in the career of a runt. Before he reaches it he should go on struggling. As soon as he reaches it he should give up and preferably let himself be eaten by his litter-mates or his parents.

Does this sound a little familiar? To me it does. Well, not the part about being eaten by one's litter-mates! Rather, the part about there being a point at which the best thing a very sickly animal can do for its genes is paradoxically to sacrifice itself, thus lowering the burden on those of its genes that find themselves in the bodies of kin. Of course, we don't have litters, but we do share 50 per cent of our genes with our parents and siblings. And to me, the neonatal apnoea phenomenon chimes all too well with this evolutionary 'runt' explanation. Apnoeas in newborns afflict exactly those babies who are most likely to die anyway - the very sick and the very premature. There is no other obvious link between the massively diverse group of conditions that can cause them. Even the manner of the respiratory centre's demise is odd. No one is arguing that the respiratory centre is too immature to control respiration at all - all these children breathe fine at first, often for days. It's just that when things start to go very badly, that part of the brain throws in the towel. It's almost as if it starts working, takes a look around, assesses the situation, and makes a literally life-or-death decision. And we have already noted the absence of a cogent alternative explanation. So, could it be...?
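
Dawkins' 'point of no return' can also be put as a back-of-envelope inequality. The sketch below is mine, and every quantity in it is invented; the point is only that once you compare the expected gene copies saved by struggling on with those saved by freeing up parental investment for siblings (each with a 50% chance of carrying the same gene), some threshold must exist.

    # Back-of-envelope version of the 'point of no return' (all quantities
    # invented for illustration). From the runt's genes' point of view:
    #   struggle on: p_survive copies carried to adulthood in the runt itself
    #   give up:     the freed parental investment boosts siblings' survival,
    #                each sibling having a 0.5 chance of carrying the same gene.

    def struggling_pays(p_survive, sibling_boost, r_sibling=0.5):
        """True if the expected gene copies from fighting on exceed those from giving up."""
        return p_survive > r_sibling * sibling_boost

    # Suppose giving up would buy the litter-mates an extra 0.4 'expected survivors'...
    for p in (0.50, 0.30, 0.21, 0.19, 0.05):
        print(f"chance of runt surviving = {p:.2f}: keep struggling -> "
              f"{struggling_pays(p, sibling_boost=0.4)}")

    # With these numbers the crossover sits at p = 0.5 * 0.4 = 0.20: above it
    # the runt's genes do better if it fights on, below it they do better if
    # it quietly bows out. The precise threshold is invented; the existence of
    # a threshold is the point.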

Of course, such an explanation is still vulnerable to many of the criticisms levelled at 'adaptationism', which in its caricatured form is the belief that every aspect of an animal has some evolutionary advantage. For instance, it may be that the brain's respiratory centre is temporarily rendered vulnerable as an unavoidable consequence of some even more important goal - say, immune system development. However, for the moment at least, no such explanation is forthcoming. And there is at least one reason to doubt such a hypothesis in general: not many things are more important to develop than the brain's respiratory centre!

Finally, it is worth reiterating Hume's often forgotten principle that you can't derive an 'ought' from an 'is'. Evolution describes the world as it is, in all its amoral indifference to us. But we are in a position to temporarily overthrow nature's will. It may be 'natural' for very sickly children to effectively attempt suicide, but that doesn't remotely imply that we should put up with it. With modern medicine, we are throwing nature's slow calculations out of kilter. We can now save many of those who would otherwise have died, and in the process we are rendering evolution's equations obsolete. A very sickly child is no longer necessarily confined to the coffin - a simple course of antibiotics is often enough. Even the 'urge' towards apnoeas can often be attenuated by a simple dose of... caffeine. Caffeine? Yes, indeed - that's apparently all it takes to undo the work of millennia[1].


[1] Which is further circumstantial evidence, of course, that the apnoea response is 'designed', rather than unavoidable.

Friday, September 21, 2007

Faith and Teapots

Why isn’t it rational to believe in unicorns? Why would it be a waste of time to investigate whether the (actual) tabloid headlines “Noah’s ark found on Mars!” or “Hillary Clinton dating a space alien” are true? How do we draw some sort of line between what is reasonable to believe and what clearly isn’t?

If you are anything like me, you may marvel at the achievements of science. I don’t mean things like cars and DVD players, as impressive as they are. Rather, it is the ability of science to force wider our conceptual horizons and show us ideas we never would have dreamt possible. I think, for example, about Einstein’s wondrous theories of relativity (which show that a baseball hurled towards a batter ages more slowly than the pitcher does and actually gets heavier in the process). Or the modern view of evolutionary biology, where evolution’s battles are played out at the level of the gene, and bodies are most properly viewed as the gene’s survival machines – programs enabling them to be replicated. And if you seek mystery, there is nothing so bizarre and confounding as the world of quantum physics, with particles appearing and disappearing for no apparent reason, all the while presenting a mind-wrenching duality of both wave and particle.

These ideas are as counter-intuitive as they come; in fact I find them even more bizarre (and far more interesting) than the tabloid headlines I quoted. And yet, scientists believe them. Now at this point you may well be asking what the difference is. How is it that a scientist is entitled to his ghostly particles, whereas, following the headline, anyone hopefully swivelling his telescope towards Mars should be treated with derision?

The difference is proof. Science starts with its default position at scepticism[1] – that is, with disbelief. If there were no proof, a scientist would know nothing. But from this position of avowed cluelessness, science (ideally, at least) accepts under its wing those claims for which there is sufficient evidence. If science had a spokesperson, she would no doubt be endlessly repeating to us recalcitrant humans that disbelieving a claim is the correct starting point on the road to truth. If anything is to be believed, there should be a reason for believing it.

To my ears though, to stop here would leave us with little more than unjustified rhetoric. After all, why should I only believe something if there is good evidence for it? Why not shun the scientific method and adopt a perversely opposite approach, believing something unless it is disproven? Incidentally, this is more than an interesting aside, for we get ourselves into quite a philosophical quagmire if we say that everything that is believed should be justifiable – and then refuse to justify this statement.

But the riddle does have a solution, and it is fiercely interesting. What we are really looking for is a method for limiting inaccuracy. In other words, if I have a statement – we could call it ‘X’ – does it have the same odds of being wrong as it does of being right? If it does, whether we choose to start with acceptance or start with scepticism would not be a logical matter, for neither would have any advantage. But if the odds are stacked in a particular direction, things start to get interesting. If any statement is more likely to be correct than incorrect, then (in the absence of any other data that might sway us) the best bet would be to believe it and change our minds only if it is proven to be untrue. We would be right more times than we would be wrong this way. On the other hand, if an isolated statement is more likely to be wrong than right, our default position should then be disbelief, again because it would guarantee correctness on a greater number of occasions.

The situation is really a bit of logical gambling. Provided that our given statement is isolated – that is, that there is no predisposing external reason for believing in it or spurning it[2] – then we have no way of knowing whether it will turn out to be true or false. But the shrewd gambler works the odds. Given a statement, the question confronting him is really whether he should put his money on ‘true’ or ‘false’. And the answer to that will depend on whether, all things being equal, the odds are skewed towards statements being either true or false.

So which is it? One method might be to count the number of true statements and compare this number with the number of false statements. Clearly, if there are, say, only a hundred true statements about the world, and a hundred million false ones, the odds of any particular statement being true would be… well, the proverbial one in a million. Placing my money on any particular statement being true could well be considered diagnostic of insanity – I would most likely stay richer longer if I piled all my money in the middle of your room and set fire to it. (Not a sensible bet then.)

Unfortunately such a theoretically easy solution is unavailable to us. The logic only works if there are a finite number of either true or false statements, even if this number is too great to actually count. Alas though, it turns out that there are an infinite number of both statements either way. The easiest way to illustrate this is to use numbers, which are a readily understandable built-in source of infinity.

First take the false statements. ‘1 + 1 = 2’ may be correct, but ‘1 + 1 = 3’ isn’t. Nor is ‘1 + 1 = 4’ or ‘1 + 1 = 5’. And since the number line goes onward infinitely, one can simply keep adding 1 to whatever ‘1 + 1’ isn’t for a new and equally spectacularly false answer!

With a slight modification, the same trick can prove that there are an infinite number of true statements. Instead of continually adding 1 to the answer column, just add it to one of the other columns, and adjust the answer accordingly:

‘1 + 1’ may only equal ‘2’,
but ‘1 + 2 = 3’ is just as true,
and so is ‘1 + 3 = 4’…

You don’t have to use numbers to prove the infinities, although it is easier. To prove false infinities in another manner, you could say, for example, “That [pointing to a computer] is spelt ‘COMPUTER’”. The statement would obviously be true, but there are an infinite number of wrong attempts. One could simply continue to add a letter: COMPUTERA, then COMPUTERAA, then COMPUTERAAA, and so on.

So our first attempt is destined for failure. Should we abandon the attempt here? Should we accept that our scepticism when confronted with fairies, unicorns, trolls and the like is just a matter of temperament, devoid of any foundation? No! Help is at hand! Something may just have caught your eye in all this – a loophole through which to squeeze our way out of our predicament. Even though there are definitely an infinite number of both true and false statements, the two proofs are not simply mirror images of each other. For the subject ‘1 + 1 = …’ there is only one possible right answer and an infinite number of false answers. But to prove that there are an infinite number of true answers, we had to change the subject. This sleight of hand obscured a weakness – a weakness that provides an opening for our next assault.

What happens if we ‘constrain’ the subject? By this I mean to take the subject of the phrase (which in the above example corresponds to ‘1 + 1 = ’) and lock it down, so that, unlike the answer, it is not free to vary. Now what is the ratio of true statements to false statements when the subject is constrained? Clearly, there is only one true statement but an infinite number of false ones! So the ratio is set nicely at 1:infinity (1:∞), which is – by definition – as poor odds as you are ever capable of laying eyes on. And from this insight it is only a small jump to understand why one’s initial position should be disbelief if one is to be at all rational about it. No gambler could be so stupid as to open with credulity.

It isn’t always quite this cut and dried, naturally. Take the example “Bob is …”, which we shall take to be the constrained subject. Now there appear to be many true answers, for example:
… a living organism
… a mammal
… a vertebrate
… stupid

However, if this appears to be a problem, then one often simply hasn’t been specific enough about the subject. Language has had a strained relationship with logic in the last century or so. In this case, the problem is that language can be a kind of shorthand conglomeration of many subjects, and which particular subject is implied may only reveal itself when the rest of the sentence plays itself out. Starting with “Bob is …” I have no idea of what the right answer is, whereas if we unpack the enmeshment, things start getting clearer, if much less aesthetically pleasing. “Regarding Bob’s status as a living or non-living thing, Bob is … a living organism” corrects this somewhat (although it sounds truly awful). In this reformulated subject, the answer “stupid” would be wrong. In a similar vein, “As regards Bob’s intelligence, with stupid being defined as an IQ of less than 80, Bob is … a living organism” sounds vaguely like an insult, but nonetheless joins the infinite list of incorrect answers.

Language’s potential to confuse can reveal itself later in the phrase too. For example, to answer “a living thing” instead of “a living organism” is really the same trick played again – this time in reverse. This time, language is being too expansive, toying with different words for the same answer altogether. I wasn’t asking whether he was a “thing” or an “organism” (which might be dead), I was asking whether he was living or non-living. The fact that other words can be used to describe the same concept is neither here nor there, or an answer in another language would have to count as a different answer. The point is that, disregarding language’s red herrings, there is still only one correct answer to each of these questions. If you are getting more than one plausible answer, then it is the question that isn’t tight enough. For instance, if “living thing” versus “living organism” bothers you, just phrase the question as pedantically as, “Which of the two phrases ‘living organism’ or ‘non-living thing’ best describes Bob?”

A slightly more fundamental limitation is that the ratio of true:false statements can be effortlessly reversed with a minus sign. To show this, let us start with another example: “I am a South African citizen” (correct) versus “I am an Italian citizen”, “I am an American citizen”, “I am an Iraqi citizen”, etc. Again, there is an infinite number of wrong answers (192-odd wrong countries, plus answers like ‘stone’ and ‘no’) - enough, one would hope, to reassure us that the best default position would again be scepticism, unless attended to by proof. What if I were to introduce a negative here, though? Now, the statements become “I am not a South African citizen” (incorrect), and “I am not an Italian citizen”, “I am not an American citizen”, “I am not an Iraqi citizen”, etc. In this (negative) case, the odds are reversed, and the odds of a phrase being false are now 1:∞. Is this a serious challenge to our progress so far? Not really; we just need to rephrase the central theorem as “Any positive statement is considered untrue unless proven otherwise”. It is worth pointing out that, far from being a challenge to our original idea, this is really the same proof in reverse – once again, it shows that the best starting point is scepticism, whether that means disbelieving a positive claim or believing a negative one.

The only time that there genuinely is more than one correct answer is when the question deals in relativity, exemplified by the phrase ‘X is greater/smaller/ more than/less than/etc. Y’. “Lucy is shorter than 2 metres tall” is true, and you will hopefully easily see that she would also be shorter than 3 metres, and 4 metres, and 5… There are, of course, still an infinite number of wrong answers too, like “Lucy is shorter than 1 metre tall” and “Lucy is shorter than 1.01 metres tall”, and “Lucy is shorter than 1.001 metres tall”, etc. These questions require a range of answers, so we have to give it to them. Alas, with this range comes the infinity problem again, and so the theorem can’t be applied in either direction.

Lastly, it should be obvious that the theorem applies only to statements where there is a right and wrong answer. Questions of subjective taste and evaluation (“The play was good”, “Apples are nice”) can never be adjudicated on objectively.

Minor objections aside, so far we have nonetheless shown that if the subject can be ‘constrained’, then it is best (usually by the biggest possible margin) to bet on a statement being false, rather than true. Furthermore, by our original logic, we have shown that this most rationally translates into the strategy of requiring proof for statements if they are to be believed. There must be good reasons for believing something, or else the statement is all but guaranteed to be wrong. But when in real life is the subject constrained? The answer is almost all of the time! Whenever we say, “A dog has four feet” we don’t permit any accusers to retort that we are wrong because a millipede (a different subject) doesn’t. No, if a statement is to be opposed, the subject must be constrained. Similarly, to condemn the statement, “South Africa’s first democratically elected president was Abraham Lincoln”, we can’t respond that Europe is small by continental standards. We would (accurately) be accused of lunacy, for the essence of an argument is to oppose the commentary made on a fixed subject – to talk simultaneously of two different things (the equivalent of not constraining the subject) is to argue like a madman.

Incidentally, this also illuminates in a slightly mathematical way just how important definitions are. If what we mean by the word “love” is different, then our argument will almost certainly be fruitless. We would revert to the problem of unconstrained subjects, where there are an infinite number of potentially correct answers, causing debate to be futile. Not having the same definition of the subject in mind is really just a less dramatic way of talking in two unrelated languages – and getting upset when the other man doesn’t talk yours!

The ‘divergent definitions’ problem aside, the understanding that disbelief is the only rational stance for proofless claims generates some massive implications. For one, this settles the irritating little debate that goes something like this: “You can’t prove me wrong, therefore I’m entitled to believe it”. No – not anymore. We have just shown that your position is next to worthless unless you can back it up. If this still seems a bit theoretical, then let me hand over to the philosopher who wrote the best prose of all, Bertrand Russell. In 1952, he famously noted:

If I were to suggest that between the Earth and Mars there is a china teapot revolving around the sun in an elliptical orbit, nobody would be able to disprove my assertion provided I were careful to add that the teapot is too small to be observed even by our most powerful telescopes. But if I were to go on to say that, since my assertion cannot be disproved, it is intolerable presumption on the part of human reason to doubt it, I should rightly be thought to be talking nonsense.

It sounds so obvious, doesn’t it – so rational. But carry this logic forward into that territory traditionally given a free pass from logic – religion – and you may get quite a jolt. Continuing, Russell notes, not without a trace of bitterness:

If, however, the existence of such a teapot were affirmed in ancient books, taught as truth every Sunday, and instilled into the minds of children at school, hesitation to believe in its existence would become a mark of eccentricity and entitle the doubter to the attentions of the psychiatrist in an enlightened age or of the Inquisitor in an earlier time.

Russell’s ‘celestial teapot’ has become a cult classic amongst atheists, and has recently been joined by other sarcastic inventions along the same lines, including the Flying Spaghetti Monster (“may you be touched by His noodly appendage”) and the Invisible Pink Unicorn. All are deliberately made both as ridiculous and as un-disprovable as possible, and clearly illuminate the madness of treating unjustified claims with credulity. The point, again, is this: without justification, any concept – including that of a god – should be treated with the same scepticism and scorn as Russell’s celestial teapot. And if there is any reason to estimate God (or anything else) as being more likely than this, then let’s hear it, for those reasons would fall quite comfortably into the realm of either logic or science. Needless to say, neither field has proved at all accommodating.

Naturally, the ambit is not limited to religion. Among the other targets I would push towards the firing line are homeopathy (scientifically proven not to work), the ‘power’ of prayer (likewise the subject of several withering analyses), ‘magic’ crystals (often ‘blessed’ by the local shaman), most ‘alternative’ health schemes (a few may work, but virtually none bother testing their extravagant claims), all religious texts, John Edward and anyone else who claims they can talk to dead people, and many, many more.

Perhaps I should step down off my soap box now, lest this turn into a moan. Let us instead return to our original observation – that science has progressed amazingly far despite its eternally cocked “Spockian eyebrow of doubt” (to use Natalie Angier’s wonderful phrase). But that isn’t quite the right way to phrase it – science’s success is not ‘despite’ such initial scepticism, it’s because of it. Instead of guessing, intuiting, divining or accepting the voice of tradition or authority, science has chosen the most gruelling method of all. And the much harder task eventually pays off, with rich rewards indeed.


[1] ‘Scepticism’ (or ‘skepticism’) is technically a branch of philosophy centred upon the idea that absolute knowledge is impossible. For the purposes of this essay, I will use the term as it is meant in conventional English today – as the opposite of credulousness.
[2] Perhaps this needs a little clarification. If I face the statement “lions have whiskers” armed with the knowledge that lions belong to the cat family, members of which generally have whiskers, then this knowledge represents what I mean by a ‘predisposing external reason’. These are really just clues that would (and should) sway us in our assessment of the truth of the original statement. An isolated statement has no such clues to help us, as in “The Andromedean species of Xweuirts have blue eyes”…

Wednesday, July 25, 2007

Menopause: when being a mother isn't worth it anymore

“I’ve had enough of you children.” – Mother Nature


To many women, menopause is a dreaded accompaniment of age. Yet so natural to our state of being is it that we scarcely realise it need not necessarily be so. We may go our entire lives without ever noticing that there is a question lurking here: “Why do women undergo menopause?”

Some authorities have argued that the phenomenon does not, in fact, require a special explanation. A woman is born with only a certain number of follicles (egg cell plus surrounding support cells) in her ovary, from which several (6-12) are chosen each month to produce oestrogen. Halfway through the monthly cycle, the egg from the dominant follicle is expelled and launched towards where a hopeful sperm cell might arrive, while the other follicles degenerate. The following month, the same process starts again, with the same result. Thus there is a continual loss of egg cells, and since the ovaries, unlike the testes, don’t produce new egg cells, the eggs in the basket must eventually run out. This usually happens somewhere between 45 and 55 years of age, and the symptoms are readily explicable – infertility (since no eggs are expelled) and symptoms of oestrogen deficiency (hot flushes, mood instability, dryness and thinning of the lower urinary and genital tracts, osteoporosis, etc.). So women simply suffer the menopause because they run out of ovarian follicles as they age and use them up.

But to argue like this misses the point. Every aspect of the design is up for grabs – why should we simply accept that in a woman, no new egg cells can be created, whereas a man suffers no such threat to his posterity? Or even if there is some abstruse reason for this, why could women not be equipped with, say, 93 billion egg cells – enough that they’d never run short? What exactly is evolution doing here? At first glance, it would seem an utterly ruinous evolutionary strategy to be programmed to abandon all hope of producing descendants at a certain age. Men seem to have ‘realised’ this, after all – a man’s fertility simply fades gradually as the years march on, in line with the general deterioration in all bodily functions that accompanies senescence. Apparently, the oldest female to give birth was 63, yet men can regularly impregnate women at this age, and there are numerous reports of men in their 90s fathering children. Another puzzling aspect of the whole process is that menopause is confined to the tiniest minority of species – there is controversy, but according to the strict definition of menopause I favour, only one other species suffers it: the short-finned pilot whale! Even if the definition is expanded, less than 0.001% of the species that could experience the phenomenon do so.

There are a few contenders to explain menopause, but in my view there is one theory that comfortably stands head and shoulders above the rest, and it is brilliant. It’s sometimes called the “grandmother theory”. The first step is to understand that while there are obvious evolutionary benefits, there is also a cost (or perhaps more accurately a risk) to bearing and giving birth to a child. Even in this distinctly unnatural modern world of antibiotics, ultrasound, science-based health carers, pharmacological cures, and caesarian sections, there is still a greater than 1% chance of the average woman dying from any single childbirth in many countries[1]. This rate must have been enormously higher in our (evolutionary) past. Clearly, if the woman dies, the chances of her creating more offspring are zero. Furthermore, we are a species with an incredibly long dependency on our parents, a point I was reminded of by Jared Diamond’s superb essay on this topic[2]. In his own words,

[H]uman hunter-gatherers acquire most food with tools (digging sticks, nets, spears), prepare it with other tools (knives, pounders, huskers), and then cook it in a fire made by still other tools. Furthermore, they use tools to protect themselves against dangerous predators, unlike other prey animals, which use teeth and strong muscles. Making and wielding all those tools are completely beyond the manual dexterity and mental ability of young children. Tool use and toolmaking are transmitted not just by imitation but also by language, which takes over a decade for a child to master. As a result, human children in most societies do not become capable of economic independence until their teens or twenties.


The awful consequence of this is that if a mother should die as a result of pregnancy, she may well inadvertently kill off some of her other children too, should they not have reached the age where they can look after themselves. Worse still, the mother doesn’t have to die to effect these changes; any pregnancy-induced pathology that severely limits her activities of daily living will do just as well. Strokes, necrotising infections and kidney diseases are among pregnancy’s legacies that, while not killing the mother, will kill her children just the same.

The next step is to understand other ways to be evolutionarily successful without giving birth and raising children. Here it gets interesting – recall that evolution cares about genes, not individuals. Genes make bodies only in order to propagate themselves, recalling the saying that “a chicken is just an egg’s way of making another egg”. From an evolutionary point of view, creating offspring is a very productive way of benefiting the genes, since exactly 50% of your genes are present in each child. (This is the highest possible degree of relatedness that two individuals can share in a sexually reproducing species like our own, apart from identical twins.) But looking after a child is only a special case of helping one’s own genes replicate. Your genes will be present in your children’s children too: your grandchildren each carry an average of a quarter of your genes. Thus, another way of helping my genes would be to help my grandchildren survive. Similarly, a percentage of my genes will also be present in other relatives: brothers, cousins, and the like. Theoretically, I could expend whatever ‘parental investment’ (food, skills-training, shelter, etc.) I would have given to my child on my grandchild instead. Usually, however, this would still be a bad strategy, since a grandchild carries only half as many of my genes as a child would, and I only have a fixed quantity of ‘parental investment’ to spend (there are only so many hours in a day, or so much food I can gather, etc.). Thus, all things being equal, a child is ‘worth’ more to me than a grandchild, but we have at least been shown another, less potent way of being evolutionarily successful. This will come in handy later.
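To make that arithmetic concrete, here is a minimal sketch with invented numbers (the relatedness values are the standard ones; the survival boost per unit of investment is purely an assumption of mine) showing why, all else being equal, a unit of parental investment spent on a child ‘buys’ twice the genetic return of the same unit spent on a grandchild.

```python
# Illustrative only: 0.5 and 0.25 are the standard relatedness coefficients;
# the survival boost per unit of investment is a made-up assumption,
# taken to be identical for both beneficiaries.
RELATEDNESS = {"child": 0.5, "grandchild": 0.25}

def expected_gene_copies(beneficiary: str, survival_boost: float) -> float:
    """Extra copies of my genes expected to survive thanks to my investment."""
    return RELATEDNESS[beneficiary] * survival_boost

print(expected_gene_copies("child", 0.2))       # 0.1
print(expected_gene_copies("grandchild", 0.2))  # 0.05 -- half the return, all else equal
```

Halve the relatedness and you halve the genetic return on the same investment – which is all the ‘worth more’ claim amounts to.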

The final step in our chain of reasoning is to understand how these variables change with age. Perhaps the most vital statistic is that the risk of childbirth increases tremendously as the years go by. For instance, a study in Saudi Arabia[3] showed that the maternal mortality rate increased by a factor of 10 between pregnant women less than twenty years old and pregnant women older than forty. Older pregnant women are at increased risk of potentially lethal pregnancy-induced hypertension, tears to the birth canal, and postpartum haemorrhage, to name but a few. All this is exacerbated by the fact that, even if the medical risk itself were held constant, the evolutionary risk of each pregnancy would still rise with age. This is because, with the risk to her kept constant, an older woman is putting more at stake – all her existing young children – each time she falls pregnant again. And then, to rub salt in the wound, a child of an older woman is also far less likely to survive – there is a higher incidence of virtually every obstetric complication, including miscarriage and congenital abnormalities. Thus the average gain from falling pregnant is also diminished in an older woman.

So, with all the pieces at hand, how does nature put the puzzle together? Basically, the idea is this. When the woman is young, nature ‘permits’ her to fall pregnant. At this age, the costs of pregnancy are relatively low and the benefits are simply too juicily seductive (a high yield propagation: a child) to ignore[4]. But with age, the risks of pregnancy increase for the genes, and the benefits decrease. For many years, the net equation will still favour having children, since on average the genes will be most benefited this way. However, there comes a point when the risks of pregnancy to our female ancestors’ genes plus the lure of alternative methods of evolutionary success outweigh the benefits of further progeny. Now, genes for shutting down the fertilisation mechanism at this point would tend to prosper, since they would out-compete genes that were making bodies indefinitely fertile. Via this brilliant switch, menopause actually becomes evolutionarily desirable. To paraphrase Diamond’s words, menopause comes about because a woman’s apparently counterproductive evolutionary strategy of making fewer surviving gene copies via children actually results in her making more copies overall. “Evidently, as a woman ages, she can do more to increase the number of people bearing her genes by devoting herself to her existing children, her potential grandchildren, and her other relatives than by producing yet another child.”
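As a purely illustrative sketch of this switch (every number below is invented by me; it is not a model from Diamond or from any study cited here), write the expected genetic payoff of one more pregnancy as half the new child’s survival chance minus the existing dependents put at risk should the mother die, and the payoff of ‘grandmothering’ as a quarter share in each grandchild helped. Let the risks worsen and the alternatives grow with age, and the balance eventually tips:

```python
# Toy model of the menopause trade-off; all figures below are invented
# purely to illustrate the shape of the argument.

def payoff_new_child(maternal_death_risk, child_survival, dependents_at_risk):
    gain = 0.5 * child_survival                              # half my genes in the new child
    loss = maternal_death_risk * 0.5 * dependents_at_risk    # existing children lost if I die
    return gain - loss

def payoff_grandmothering(grandchildren_helped, survival_boost):
    return 0.25 * grandchildren_helped * survival_boost      # a quarter of my genes in each

# Hypothetical trajectories: risk rises and infant survival falls with age,
# while the number of dependents and grandchildren grows.
for age, risk, survival, dependents, grandkids in [(20, 0.02, 0.9, 0, 0),
                                                   (30, 0.04, 0.8, 2, 0),
                                                   (40, 0.10, 0.6, 3, 2),
                                                   (50, 0.20, 0.3, 3, 5)]:
    child = payoff_new_child(risk, survival, dependents)
    gran = payoff_grandmothering(grandkids, 0.2)
    verdict = "keep reproducing" if child > gran else "switch to grandmothering"
    print(age, round(child, 3), round(gran, 3), verdict)
```

With these made-up trajectories the switch falls somewhere between 40 and 50 – at least the right neighbourhood – but the point is the shape of the argument, not the numbers.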


What an ingenious argument! We can even now tie up loose ends. Why don’t men get menopause? The answer is fairly obvious given the preceding litany of mortal potholes in the pregnancy road: men don’t give birth. More accurately, men bear very little intrinsic cost in making a child: they don’t grow a baby for nine months from food that went into their own mouths, they don’t risk dying during childbirth, and they usually provide less for the baby after its birth too. Seeing that the woman inevitably carries most of the obligatory costs of pregnancy, a man’s genes never face the scenario where the risks of pregnancy outweigh its benefits. It virtually always pays a man’s genes to keep fathering children (with younger women if necessary), rather than to forgo this bountiful method of genetic replication and concentrate exclusively on other, watered-down methods of evolutionary success.

And why don’t most other female animals have menopause? There are at least two cogent reasons for this. Firstly, most other animals don’t have nearly as high a risk of mortality in childbirth. One reason for this is that our big brains are so evolutionarily favoured that our bodies delay labour until the last possible moment to allow our brains more time to grow. The obvious downside to this is mechanical: in around 1% of cases, nature gets it a little wrong, and the baby’s head doesn’t fit through the mother’s pelvis. Often, the result is the worst of possible consequences for the mother – death – and this must have been especially true in the days before caesarian sections. The second reason why animals don’t usually get menopause is that there are often no viable alternatives to childbirth available to them. Clearly, if I plan to assist my genes by helping out my grandchildren and cousins, et al., I will be stymied if they’re not around. Many animals do not live in family groups as large and interconnected as ours. Solitary animals like the sloth have to continue to produce offspring or else exit the game altogether, whereas we have other options.

That all these factors are taken into consideration may boggle the mind unaccustomed to evolutionary methods. Yet in many respects, the Blind Watchmaker that Darwin discovered has it easy – a variety of options are randomly thrown up in the form of mutations, and the best of these tends to prevail, no matter how complicated the reason for its success. It is left to us to wipe away the dust hiding our evolutionary past and marvel at life’s grandeur.



[1] According to estimates from WHO, UNICEF and UNFPA in 2000, Sierra Leone had the highest maternal mortality rate (2000 per 100 000), followed closely by Afghanistan (1900 per 100 000).
[2] http://discovermagazine.com/1996/jul/whywomenchange817
[3] http://www.kfshrc.edu.sa/annals/154/94181/94181.html
[4] Actually, this must obviously be true of any successful species at some stage – at least some individuals must reproduce, for if there were no reproduction, not only would there be no offspring, but there would also be no extended family to help out as an alternative strategy!

Saturday, June 16, 2007

From Whence Comes Evil?

The question, “From whence comes evil?” was framed by, among others, the Gnostics of the first few centuries A.D. It remains a question unsatisfactorily answered today. Every one of us has been evil’s victim on countless occasions, and as questioning beings, we are entitled to ask, “Why?”

My own attempt would begin with the assertion that evil is a value judgement, not an objective quality. My reasons for this have been set out elsewhere and need not bore the reader here. Thus, in one sense, we are required for ‘evil’ to exist, as without a conscious judger there could be no such thing. But this is a little misleading, since it suggests that we are evil’s author and creator. It is more helpful to think of us as adjudicators of which things are bad and which things aren’t. More strictly, although evil is entirely subjective, its subjective reality is not diminished by this in the slightest. I am perfectly capable of suffering from bereavement, feeling pain, and the like, and to claim that this evil is obviated just because it is subjective is a grave misunderstanding. Thus, without contradiction, it is possible to rephrase the question more elegantly as, “Why do we suffer?”

If, like me, you do not subscribe to the bearded white male in the sky, possessor of supreme omnipotence, omniscience and benevolence, then you do not have a problem with the answer to this. Nature, despite its magnificence, is a blind force which cares not the slightest for you or me, and goes out of its way neither to harm nor to help us. Thus it is hardly surprising that we may fall foul of it on occasion. Bad things happen because… well, why shouldn’t they? It is inevitable that, from time to time, what we want and what nature does will clash.

Conventional God-is-all-good theology has a problem, however - a big problem. In fact, it is a problem that shakes the very foundations of conventional theology, for it turns out that the triumvirate of omniscience, omnipotence and complete benevolence is logically unsustainable when confronted with even the most basic of facts about the world. In all its glory, then, the problem is this:

If God is totally good, he can’t want us to suffer. If he is omniscient, he knows that we suffer. And if he is omnipotent, he is capable of stopping the suffering. Yet we suffer. Therefore, God can’t be all of these things.
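For readers who like their logic mechanised, here is a minimal sketch of my own (a formalisation I have added, not part of the original argument): write B, K and P for benevolence, omniscience and omnipotence and S for “suffering exists”; take as the combined premise that B, K and P together would rule out suffering; add the observation that suffering exists; and check every truth assignment by brute force.

```python
from itertools import product

def consistent(B, K, P, S):
    # Premise: a benevolent, omniscient, omnipotent God entails no suffering.
    premise = (not (B and K and P)) or (not S)
    # Observation: suffering exists.
    return premise and S

surviving = [combo for combo in product([True, False], repeat=4) if consistent(*combo)]
# In every assignment that survives, at least one of B, K, P is false.
print(all(not (B and K and P) for (B, K, P, S) in surviving))  # True
```

No assignment keeps all three attributes – which is just the syllogism above in mechanical form.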

As I said, those of us who do not believe in this God nonsense have no such problem – nature is neither omniscient nor omnipotent. And it is certainly not supremely benevolent. Tennyson charged nature with being “red in tooth and claw”, and Darwin himself remarked to a friend:

“What a book a Devil’s chaplain might write on the clumsy, wasteful, blundering, low and horridly cruel works of nature.”

But this description would not do for the Divine One, and so conventional theology has offered some tawdry ‘rebuttals’. In order of increasing desperation, I have labelled them the “We Actually Did It” argument, the “It Serves Us Right” argument, and the “It’ll Make A Man of You” argument. There is also the perennial religious answer to any argument out of its depth, and I have labelled it the “Ostrich” argument. So with this forewarning, take a deep breath and read on…


1. The “We Actually Did It” argument
Subscribers to this jaundiced view are of the opinion that we (human beings) are the cause of all the cosmos’ evil. God, being supremely good, created a wholly good world without any evil. He also gave us free will. And it is from our choosing to do wrong things (in particular, eating an apple) that all evil stems.

The most fundamental objection is that there doesn’t seem to be any rational case for saying that we have free will! Once again, though, I have covered this enormously interesting ground elsewhere, and must refrain from repeating myself. Suffice it to say that if the notion of free will is illusory, then this argument is left in tatters.

And even if we did have free will, it seems highly doubtful that an omnipotent creator should be incapable of creating free agents of such moral fibre that they choose to do only good. After all, we all choose good some of the time, and some do so more than others. Why not extrapolate this principle and simply make free-willed creations so perfect that they do only good? It would certainly save on the damnation. After all, this is the claim made for Jesus, his son and creation. Why not make us that way, then? Unless you mean to contend that Jesus wasn’t free…? Or, even if humans were left (and you know by Whom) with a permanent predisposition to do some evil, the rest of us could certainly be insulated from this evil. Why should we suffer from the evil of others?

The most obvious objection, though, is that we can’t possibly be the origin of everything that goes wrong! Take an earthquake, for instance. How (in God’s name?) did we cause that? Furthermore, while we may have a hand in some things, it is patently absurd (not to mention presumptuous) to claim that we have caused every single heatwave, flood, common cold, flat tyre and pimple (not to mention, of course, death!) since the dawn of time. This is… well, nonsense. Utter nonsense. Let’s move on.


2. The “It Serves Us Right” argument
This argument states that the evil we experience is actually our just punishment for the sins we have committed. Interestingly, it must still finger God as evil’s originator here, unless the argument is combined with the first misconception above. It’s just that we deserve the evil.

Once again the objection to this childish (and tragic) argument is obvious to anyone whose head is not numbed by propaganda. If suffering is a just punishment for our sins, why then do tyrannical dictators live out their lives in luxury while their abject subjects starve? Imagine a year-old child who is starving because his mother was killed. It would take a psychopath to honestly look this child in the eye and proclaim that starvation was his just deserts! Yet bizarrely, some still think that this argument holds water. The point is that to anyone with the remotest connection to reality, the vast majority of punishments meted out don’t match the crime committed. Lord, thy scalpel is blunt!

A horrendous offshoot of this is the contention that because ‘Adam’ ‘sinned’ ‘in the beginning’, children are born with original sin. Thus anything bad that happens to them is justified, since they are ‘sinners’! Just what is being postulated here? Some form of ‘sin transmission’ through the semen?[1] Putting minor scientific quibbles aside, this notion is irretrievably atrocious! Think about it – since when is someone, let alone an innocent child, responsible for some sin committed by someone else? Imagine the injustice of imprisoning you because your great-great-uncle cursed sometime in 1843!

The ‘justified punishment’ argument rests, if it is to have any plausibility, on an incredible sense of callousness. Indeed, we would do well to reflect, for a moment, on religion’s powerful ability to anaesthetise otherwise good people against suffering and injustice. To paraphrase T.S. Eliot, they may not mean to do harm, but the harm doesn’t interest them.

Once again, let us move on…


3. The “It’ll Make A Man of You” Argument
Evil may be a bad thing, but the good that comes from it justifies it – so says this argument. Evil’s place is thus defended on the grounds that it can teach us to be virtuous. For instance, the death of my parents may cause me to appreciate people all the more while they are alive.

It is interesting to point out that, once again, God’s grubby pawprints identify him as the cause of the evil here, be it justified or otherwise (unless the first ‘argument’ is used). Brushing that aside, it is certainly possible for good things to come from bad ones. I personally take some comfort from the realisation that some of the most painful events of my life have in fact contributed in a positive way to my development. But to let God off the hook in this way is fallacious. If he is omnipotent, it would certainly be possible for him to give me the wisdom, etc. that I gained through my blood, sweat and tears – but without the blood, sweat and tears. I might have to go through all the suffering in this world in order to become more wise, etc., but there is no reason why the world should work this way in the first place! We have simply not dissected things out enough if we believe that God is forced to give us the good with a measure of bad. Why could we not become wise without suffering? God is supposed to be omnipotent after all – and so by definition he must be able to accomplish this feat! Once again, thy scalpel needs sharpening, O Lord…

Like the second argument above, this argument all too often spills into the callous heartlessness to which religious people are particularly prone. The Oxford theologian Richard Swinburne, in a spine-chilling passage, writes:

Suppose that one less person had been burnt by the Hiroshima atomic bomb. Then there would have been less opportunity for courage and sympathy.

As Richard Dawkins remarks on this terrifying character, “Richard Swinburne is the recently retired holder of one of Britain’s most prestigious professorships of theology, and is a Fellow of the British Academy. If it’s a theologian you want, they don’t come much more distinguished. Perhaps you don’t want a theologian.” Indeed. But even if you shared Swinburne’s psychopathic detachment, you would still be left with the impossible task of explaining why the supposedly omnipotent God was incapable of providing sufficient “opportunity for courage” without resorting to the torching of civilians. And so another argument falls.


4. The “Ostriches Don’t Need To Think” Argument
Rather than concede the argument at this point, the theologian is finally forced back to the last ground any religious person holds: God works in mysterious ways. I’ll say! It is really the most desperate of ploys, and what it equates to is, “Come what may, I have faith in God’s goodness and power. No evil, no matter how great, can convince me that it isn’t actually good in disguise.” Well yes, if you stick your head in the sand like an ostrich, you will indeed see no evil. This is the sort of argument made from the heart after the head has packed up and left, and its flaws are obvious. If I don’t need any good reason for believing that God is good (“I’m just going to believe”) then the belief is as likely as the belief that invisible fairies wrote this essay in collusion with a unicorn and the Yeti. There is as much evidence for the first belief as for the second. In fact, I would argue that fairies, unicorns and the Yeti are more likely, since even though they aren’t supported by the facts, there at least aren’t strong logical arguments against their existence! I doubt whether the theologian would see this implication, but then he’s hardly been chosen for his ability to follow an argument.


Conclusion
The problem of evil is not, by itself, a proof against God’s existence. Strictly speaking, it is just a proof against God being omnipotent and omniscient and completely benevolent. At least one of the three cardinal characteristics must be jettisoned if we are to preserve any semblance of reason on the issue.

That these issues aren’t more widely and publicly discussed within churchgoing communities is hardly surprising. Religion, in common with other extremisms, does not encourage a questioning frame of mind. Quite the opposite - accepting statements without any good reason to do so is in fact taken as a virtue! It’s labelled ‘faith’. Alone among Jesus’ disciples, the biblical Doubting Thomas takes the only rational position (in the absence of any corroborating evidence) and remains sceptical about the highly unlikely resurrection story. For his logical, scientific spirit, he is actually condemned by none other than Jesus himself.

The arguments set out above are, in my opinion, utterly compelling. It is one of the few areas in philosophy where one can be rather forthright. To someone on the atheist side of the deistic spectrum, the arguments should serve to embolden. To someone on the opposite side, they should at very least show that the conventional idea of God (in the Judeo-Christian sense) must be abandoned.

But I don’t hold out too much hope of this – I predict the ostrich reaction, deployed as ever as a defence against actually growing up.


[1] I spoke too soon. I have subsequently discovered that this was exactly what St Augustine postulated. I’m just speechless. Utterly speechless.

Special acknowledgment: Though inevitably many sources have been used for this essay, it would be remiss not to particularly mention the chapter “Does God exist?” from Stephen Law’s book “The Philosophy Gym”. Its clear elucidation of several of the arguments mentioned in this essay was a (shall we say) godsend.

Free Will (On Intellectual Integrity)

Of all things, the concept of free will seems pretty immune to doubt. As with the question “are we conscious?”, the notion of free will is felt so deeply that we recoil with incredulity should anyone ever dare to doubt it. But if pressed, most of us can offer no more than a variant on the theme, “Well, I feel as if I have free will.”

Strangely, most think that this clinches the argument in their favour. Whenever I hear this, I am always somehow reminded of a fictional scene with a man (I imagine in the 1700s) stamping his foot in frustration and protesting, “Well of course the earth isn’t moving!!!” Like his modern counterparts, this mythical intellectual would have been baffled by our insistence that the earth was actually in orbit. But, tellingly, he would not have been able to topple our modern (contrary) argument on the basis of evidence or logic.

Fortunately for us, the way things seem has no necessary bearing on how they actually are. Imagine if they did – imagine if our belief that people should fall out of rollercoasters when they were upside down actually caused that to happen! At school, we have to be coaxed into accepting the centrifugal ‘force’ as the correct expectation, though our intuitions fight us all the way. In a similar vein, it is tough to accept the idea that weight is irrelevant in determining which of two dropped objects will hit the floor first. Time and time again, we intuitively reach for the heavier one when asked this question. But it would be foolish to cling to this belief after one had been shown otherwise by logic or experiment.

Examples like these could be multiplied to infinity. With any luck though, they have sufficed to illustrate that the way things appear bears no necessary relation to reality, though it (hopefully!) often corresponds to a high enough degree. We tend to proceed in life on our assumptions, and this is on the whole justified. But we all accept deep down that evidence and logic are trump cards to groundless assumptions. How many stomachs does a cow have? We may assume that, as with ourselves, “one” is the safe answer. But enough evidence of “four” (one might be satisfied by a textbook, say, or a dissection) should always push our guess aside if we are sane.

So with this in mind, let us return to the idea of free will again. We ‘all’ think we have it, not so? Well, then it may surprise you to know that the law of ‘cause and effect’, or causality, forbids it. I’ll say it again – the law of ‘cause and effect’ forbids free will.

That certain things can cause other things to happen seems obvious. The pen fell to the ground (effect) because I pushed it off the table (cause). Consider it carefully, and you will see that virtually every answer to a question starting with “why” uses the notion of cause and effect, or ‘causality’. Even human emotions are apparently subject to it:
“Why is the girl sad?” [effect]
“Because her balloon popped.” [cause]

Causality is a common sense deduction from observations about the nature of things. All of us are familiar (even if we don’t phrase it this way) with the fact that the universe seems to follow certain laws. I mean ‘laws’ in the sense that they are descriptions (not prescriptions, as in our legal system) of regularities within nature, which do not seem to be violated. The universe does not seem to vary capriciously and maliciously. For example, if an apple is ‘severed from its arboreal connection’, its conduct thereafter always obeys the approximate description entailed in Newton’s law of gravity, unless it is acted on by other forces (themselves subject to their own laws). It is almost as if the apple has no option but to accelerate towards the centre of the earth at the rate of 9.8 metres per second per second. The apple never goes upwards on gravity’s account. The idea of nature’s regularity is what the majority of science is based on. In a sense, science’s goal is to uncover nature’s laws – laws which are only possible because of the uniformity of nature.

The understanding that nature seemed to work in a particular way, and not in other ways, gave birth to the idea which we now call ‘determinism’. That is, if the present state of affairs (the ‘present system’) and the laws that affect it are known, then the future state of affairs (the ‘future system’) should theoretically be calculable. For example, if I know that a car is travelling at a constant 100 km/h in a northerly direction from me, then I can infallibly know that after half an hour it will be 50 km due north of me. Interestingly enough, determinism does not depend on us knowing all the laws perfectly; it suffices that there are inviolable laws (even if undiscovered) for determinism to be logically inevitable.
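Written out, the car example is a two-line calculation – which is really all determinism claims: present state plus governing law yields future state. A minimal sketch using the essay’s own numbers:

```python
def future_position(current_km_north: float, speed_kmh: float, hours: float) -> float:
    """Present state + the governing 'law' (constant velocity) -> future state."""
    return current_km_north + speed_kmh * hours

print(future_position(0.0, 100.0, 0.5))  # 50.0 -- the car is 50 km due north after half an hour
```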

But if determinism is true about future systems, then it follows that the present system must have been brought about because of some past system having been subject to the laws of nature. The apple being one metre above the grass is ‘because’ a fraction of a second ago it was one and a half metres above the grass and gravity acted on it. (This may be an unusual answer to give, but it is nonetheless true – as true as taking the scene a little further back and giving the conventional answer of its stalk becoming separated from the tree.) Thus, the present (i.e. ‘effect’) was caused by the past, and that past was caused by a more distant past, and so on.

And if this is true of events generally, then it must be true of things like mental states or decisions too. Like the apple, my writing this essay must have been caused by something (even if I don’t know it) – which is just another way of saying that it is the consequence of the combination of some prior state and natural laws. But this prior state itself must have been caused by some more distant state combined with natural laws. And so on, and so on once more. Thus the problem with free will is laid bare – if we follow the chain of cause and effect back far enough, eventually we must get to a state far enough back in time to remove us from any responsibility. Nominally, we could set it at our birth, for to blame me for something in 1975 when I wasn’t yet even a thought in anyone’s mind (let alone physically around) would be absurd! Usually, however, the causal chain moves comfortably outside our personal control well before this point, but even if it somehow doesn’t, it must (by logic) head off into a past for which we aren’t responsible.

Thus, if causality is preserved, we cannot have free will, as conventionally understood, because the cause of our actions/decisions/evaluations/etc. always starts outside our consciousness. I believe that it feels like we have free will because this is exactly what it would feel like when the causal chain happens to pass through our consciousness, but that is beside the point.

The only remotely plausible retaliation to this death sentence for free will would be to deny causality. This might seem ridiculous, but it is no more ridiculous in principle than our abandoning of free will. Recalling our earlier doctrine that appearances don’t change reality (and thus that everything is open to some doubt), it would be hypocritical of us to sequester causality from scrutiny. Pleasingly for this notion, quantum physics, for example, does seem to indicate that at a subatomic level, things can actually have no cause – they just happen. Whether or not this wrecks determinism at any level larger than this is doubtful, but let us assume that it does. Are we ‘free’ again?

As contrary to our common sense as this seems, this breakdown in causality eliminates ‘free will’ even more easily. If there were events in our brains (say) that weren’t caused by anything, they would have to be random. And if they are random, once more we can’t be held responsible for them.

So whether or not causality is correct, we can’t have free will! Notice that appealing to things as mythical as ‘souls’ doesn’t help one bit – the same questions can be brought to bear on how the souls ‘choose’ their actions. And if someone were to retaliate that souls are ‘not bound by causality’ or any other such nonsense, then quite apart from doubts as to his or her intelligence or sanity, the argument would be fatally exposed. This is because what our hypothetical unfortunate must postulate is some sequence of events in which the components are both non-linked (i.e. non-causal) and linked (i.e. non-random). Not only is this logically impossible, but, needless to say, no such thing has ever been seen. I’ll repeat that – not only is it as crazy as describing something as both black and non-black, but there is absolutely no evidence that such a thing exists.

The only response I’ve ever had to this, usually delivered with a sort of deer-in-the-headlights expression, is, “But then there is no room for blame.” Quite right, and the scandal that we persist in this superstition long after logic has trashed it is quite baffling. But the person who makes this objection usually seems to think that it is a point against the logic described above. I find this misconception so difficult to conceive of that it took me a long time to understand it. The implications of a truth can never count against its validity. It is really as pathetic as thinking that the earth must be flat because otherwise our security would be in peril, for we could be attacked by navies coming from over the horizon as well!

One final word on the matter. No matter how inconceivable, there always remains the philosophical possibility of error in any statement about the outside world. It is possible, though I cannot envisage how, that the above isn’t completely logically binding. But I don’t need to prove absolute truth in order to show that ‘free will’ is irrational. I can’t prove strictly, to the nth degree, that the toast I am about to eat isn’t actually a secretly-disguised poison made by Martians either. But I don’t need to. All I need to do is to show that the likelihood of this is less than the likelihood that it is, well, plain old toast. Nothing in our lives can be proved, nor disproved, to the perfect degree, but some things are definitely more likely than others to be true. We would rightly be considered insane if we abandoned all reason and believed that the two options – toast or disguised Martian poison – were on an equal footing. And we would be committed to a mental asylum if we actually favoured the latter.
All I ask is that we do the same with concepts like free will. Then we can deal with the implications like adults afterwards.