In the Name of God

The Evolutionary Origins of Religious Ethics and Violence
About This Book

Religion is one of the most powerful forces running through human history, and although often presented as a force for good, its impact is frequently violent and divisive. This provocative work brings together cutting-edge research from both evolutionary and cognitive psychology to help readers understand the psychological structure of religious morality and the origins of religious violence.

  • Introduces a fundamentally new approach to the analysis of religion in a style accessible to the general reader
  • Applies insights from evolutionary and cognitive psychology to both Judaism and Christianity, and their texts, to help understand the origins of religious violence
  • Argues that religious violence is grounded in the moral psychology of religion
  • Illustrates its controversial argument with reference to the 9/11 terrorist attacks, and the response to the attacks from both the terrorists and the President
  • Suggests strategies for beginning to counter the divisive aspects of religion
  • Discusses the role of religion and religious criticism in the contemporary world
  • Argues for a position sceptical of the moral authority of religion, while also critiquing the excesses of the "new atheists" for failing to appreciate the moral contributions of religion
  • Awarded Honourable Mention, 2010 Prose Awards

In the Name of God by John Teehan is available in PDF and ePUB format.

Information

Year: 2011
ISBN: 9781444359138
1
THE EVOLUTION OF MORALITY
Many lands saw Zarathustra, and many peoples: thus he discovered the good and bad of many peoples. No greater power did Zarathustra find on earth than good and bad. No people could live without first valuing; if a people will maintain itself, however, it must not value as its neighbour valueth. (Friedrich Nietzsche1)
Setting the Task
Evolution via natural selection is a fairly simple, straightforward process. You may be forgiven if you find this strange given all the fireworks surrounding public discussions of evolution, as well as the confused caricatures offered by its foes.2 But, in fact, it is a fairly simple, straightforward process. Ernst Mayr commented that Darwin’s theory of natural selection is the conclusion of “one long argument,” as Darwin himself put it, and is based on three uncontroversial principles – inheritance, variation, and competition – that are simple to state and comprehend:3
Inheritance: Offspring tend to inherit the characteristics of their parents.
Variation: Offspring will also vary from their parents and from their siblings. In addition, individuals from different families and different species also vary.
Competition: Life is a competition for limited resources in which it is not possible for all individuals to succeed. Not all individuals can reproduce and have offspring who themselves successfully reproduce.
From these simple observations Darwin deduced the principle of natural selection: variations that provide an advantage in the competition for resources tend to be passed on (inheritance) to the next generation.
The corollary to this is that those variations that are not advantageous may not be passed on. “Advantageous” is a relative term. In evolution, a trait provides an advantage if it contributes to an individual surviving to the age of reproduction and reproducing successfully. Success is measured in terms of differential reproduction: which variation(s) allows an individual to out-reproduce its competitors. In the next generation the genes of the successful reproducers will be better represented than those of less successful reproducers. The accumulation of this differential reproductive success, carried out generation after generation, is evolution.
The strength of Darwin’s theory is not simply the overwhelming evidence that supports it, evidence supplied by diverse fields such as genetics, microbiology, anthropology, ethology, botany, and paleontology, but also the undeniable logic of the argument. Daniel Dennett argues that what Darwin discovered is the algorithm of natural history. As Dennett puts it, “An algorithm is a certain sort of formal process that can be counted on – logically – to yield a certain sort of result whenever it is ‘run’ or instantiated.”4 Given that there is differential reproduction (not all individuals will be equally fecund) and that certain traits contribute to successful reproduction, and that parents pass their traits to their offspring, it follows necessarily that those traits will be better represented in the next generation. That Darwin saw this when no one else had is the basis of his genius.
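Dennett's point that selection is an algorithm can be made concrete with a toy simulation (not from the book; the two gene variants, the 10% fitness advantage, and the population size are arbitrary illustrative choices):

```python
import random

def generation(pop, fitness):
    """One round of differential reproduction: each individual's chance of
    leaving an offspring is proportional to the fitness of its gene."""
    weights = [fitness[g] for g in pop]
    return random.choices(pop, weights=weights, k=len(pop))

random.seed(0)
# Two variants of a gene; "A" confers a 10% reproductive advantage.
fitness = {"A": 1.1, "a": 1.0}
pop = ["A"] * 50 + ["a"] * 50

for _ in range(200):
    pop = generation(pop, fitness)

# The advantaged variant approaches fixation in the gene pool.
print(pop.count("A") / len(pop))
```

Run the loop and the "A" variant crowds out "a" almost every time, purely as a logical consequence of inheritance, variation, and competition; no foresight or purpose is involved.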
What is not quite so obvious is just how much can be explained by this process of natural selection. What we know, and Darwin did not, is that inheritance works by passing genes from one generation to the next. Genes underlie traits that lead to successful reproduction and get passed on to the next generation. Evolutionary change is driven by differential representation of genes in the gene pool, and here we can begin to see the challenge to an evolutionary account of morality. What makes for a successful gene? In strict evolutionary terms a successful gene is one that gets more copies of itself into the next generation – that’s it. Richard Dawkins sets it out as follows:
Genes are competing directly with their alleles for survival, since their alleles in the gene pool are rivals for their slot on the chromosomes of future generations. Any gene that behaves in such a way as to increase its own survival chances in the gene pool at the expense of its alleles will, by definition, tautologously, tend to survive. The gene is the basic unit of selfishness.5
Before proceeding we need to be clear on the use of language when discussing genes and evolution. Using metaphors is almost unavoidable when discussing these issues (or almost any issue, really), particularly if we want to avoid overly technical and tedious qualifications every time the issues come up. Let it be stated here: Genes do not behave selfishly, or morally, or in any other way. Genes encode the directions for the production of proteins, which are the material for the construction of phenotypic structures, such as bodies and brains, which do all the behaving. In more technical language, to say that a gene is “selfish” is to say that it leads to conditions that tend to make its own reproduction more likely than that of an alternative gene.6
Here is the problem for an evolutionary account of morality: If successful genes are “selfish” genes, then it seems to follow that these genes will lead to organisms and traits that are also “selfish.” After all, it is the behavior of the organism that determines whether its genes succeed. Organisms should behave in ways that promote their own reproductive success – and this is what we find throughout the living world. However, in certain species this “selfishness” is tempered by cooperative behavior, and cooperation needs to be explained. Some cooperative behavior can be explained as mutualism. For example, you and I join together to hunt an animal neither of us could kill alone. We share the risks and then we share the meat. Neither of us is really making a sacrifice for the other, and we can explain this strictly in terms of self-interest. But not all cooperation is like this; take, for example, fighting off a predator. If a dangerous animal attacks us we will be better able to defeat it if we join together and share the risks. I, however, would be better off allowing you to fight the animal by yourself and assume all the risks, while I run for safety. But perhaps I am not fast enough to get away and my only chance is to stay and fight, so I join forces with you. In that case, why would you stay and assume the risks instead of running for safety? Remember the old joke about two friends confronted by a bear in the woods. One turns to the other and asks, “What should we do?” The other says, “Run.” The first friend then asks, “Do you think we can outrun the bear?” To which his friend replies, “I don’t need to outrun the bear, I only have to outrun you!” It seems evolution would favor genes that support the “run” strategy rather than the “cooperate” strategy.
Even in situations where mutual advantage seems to justify cooperation, things are not as clear as they first appear. Say you and I have hunted successfully for food. Why should I share the meat rather than take it all for myself? Again, I may not be strong enough to overpower you and so it may be safer for me to share; but then why would you share the meat with me? If one of us is strong enough to take all the meat, sharing seems to be a selfless act inconsistent with “selfish” genes. Evolution should favor genes that lead to abilities that allow one to take all the food, rather than to a willingness to share. This has some dire implications. As Thomas Hobbes reasoned, if I am rationally self-interested I will never share when I can take it all, and neither will you. This means it will never make sense to enter into a cooperative venture unless I am confident that I can exploit your trust; and since you are equally rationally self-interested, and will recognize the same logic, you will never trust me, and so cooperation can never get off the ground.7
This is captured nicely in the famous Prisoner’s Dilemma game. There are many variations, but the basic scenario runs something like this: Brian and Joe are arrested for committing a crime. The police do not have enough evidence to put either away on the most serious charge, which would carry, say, a five-year sentence, but could put both away for one year on a lesser charge. So they separately offer the two a deal: If they each testify against the other they will reduce the penalty to three years. However, if one testifies against the other who refuses to confess, then that person gets to go free, while the one who keeps quiet gets the full five years in prison. Both know the other has been made the same offer, but they cannot communicate with each other.
The best outcome for Brian and Joe as partners is for them to cooperate and both keep quiet; but the best outcome individually is for them to take the deal and testify against their partner, that is, to defect from their partnership. For Brian knows that if he talks while Joe keeps quiet (that is, while Joe chooses the cooperative option), then he (Brian) gets to go home, but if he decides to cooperate with Joe, and Joe does not reciprocate, then he is in deep trouble and is looking at five years in jail. So, regardless of what his partner does, Brian’s best outcome is to not cooperate, and evolution should favor creatures that make the decision that best serves their self-interest.
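The sentencing numbers from the scenario can be tabulated to show why defection dominates (a sketch; the dictionary layout and function names are mine, not the author's):

```python
# Years in prison Brian serves (lower is better), for each combination of
# choices. Numbers follow the scenario in the text: 5 years on the serious
# charge, 3 if both testify, 1 on the lesser charge, 0 if you alone testify.
# Keyed as (brian_move, joe_move).
SENTENCE = {
    ("quiet", "quiet"): 1,      # both cooperate with each other
    ("quiet", "testify"): 5,    # Brian keeps quiet, Joe defects
    ("testify", "quiet"): 0,    # Brian defects, Joe keeps quiet
    ("testify", "testify"): 3,  # both defect
}

def best_reply(joe_move):
    """Brian's best move (fewest years) against a fixed move by Joe."""
    return min(("quiet", "testify"), key=lambda m: SENTENCE[(m, joe_move)])

# Whatever Joe does, testifying is Brian's best reply: defection dominates.
print(best_reply("quiet"), best_reply("testify"))  # testify testify
```

Against a quiet partner, testifying saves Brian a year (0 vs. 1); against a testifying partner, it saves him two (3 vs. 5) — which is exactly the dilemma.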
However, despite considerable barriers, we know that cooperation does occur; humans have always lived in groups, and are descended from ape-like ancestors that also lived in groups. Group living requires cooperation and so we must have devised strategies to get around these selfish barriers. In fact, the large-scale cooperation characteristic of human societies may be our defining human trait. Theorist Martin Nowak points out, “From hunter-gatherer societies to nation-states, cooperation is the decisive organizing principle of human society. No other life form on Earth is engaged in the same complex games of cooperation and defection.”8 The challenge is to discover how such strategies evolved.
The Prisoner’s Dilemma game has generated a great deal of experimental work.9 A significant insight into developing an account of how a cooperative strategy might evolve is the recognition that in nature such cooperative dilemmas are often not one-shot deals, particularly for social creatures. Individuals often have repeated opportunities for cooperative interactions, with the possibility of having future interactions with the same partners. In iterated Prisoner’s Dilemma games, cooperation can develop because the costs and benefits of cooperation are averaged over repeated events. In this scenario the long-term benefits of cooperation can outweigh the potential immediate costs. There is much discussion of how cooperation in the Prisoner’s Dilemma may evolve, that is, of which strategy offers the winning formula. For now, let’s grant the possibility that repeated opportunities to interact with the same pool of individuals may allow cooperation to develop as a long-term rationally self-interested strategy, and so be consistent with the evolution of “selfish” genes. But this does not capture the extent of the human tendency to cooperate. There is a wealth of experimental data indicating that humans are predisposed to cooperate and share resources with others, even when there is no possibility of meeting that individual again. To get a sense of this we need to introduce the Ultimatum game.
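The iterated logic just described can be sketched in a few lines. The payoff values below (3 for mutual cooperation, 5 for lone defection, 1 for mutual defection, 0 for the exploited cooperator) are the conventional ones from the game-theory literature, not figures from the book, and "tit-for-tat" is one well-known reciprocating strategy:

```python
# Payoff to the first-listed player per round (standard PD values).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(partner_history):
    """Cooperate first, then copy whatever the partner did last round."""
    return partner_history[-1] if partner_history else "C"

def always_defect(partner_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Total payoffs to A and B over repeated rounds."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(tit_for_tat, always_defect))    # (99, 104): exploited only once
print(play(always_defect, always_defect))  # (100, 100): mutual defection
```

Two reciprocators who keep meeting vastly outperform two defectors (300 vs. 100 points each), which is the sense in which long-term cooperation can be a rationally self-interested strategy.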
In the Ultimatum game two subjects have to make individual decisions on the division of a sum of money. One individual, Sue, is first given a sum, for example, $100, and is instructed to divide the money between herself and a second individual, Pat, who Sue knows will be given the option of accepting or rejecting the offer. If Pat accepts the offer, both individuals receive the sums proposed by Sue. If Pat rejects the offer, neither one gets anything. This is a one-shot interaction, there is only one round of play and the subjects do not know each other. What should they do? If both are rationally self-interested individuals, then Sue should offer Pat a small cut of the money and Pat should accept it. Even if Sue offers only $1, Pat should accept because the options are to accept and get $1 or reject and get nothing. If Sue believes Pat is rationally self-interested, then she should never offer anything above $1 because to do so would unnecessarily reduce her own benefits in order to benefit another. This is just what should happen if we are Hobbesian individuals, but in fact it is not what happens. The experimental data show that individuals regularly reject low offers.
Studies show that “a robust result in this experiment is that proposals giving the responder shares below 25% of the available money are rejected with a very high probability”10 – suggesting that responders “do not behave to maximize self-interest” but instead reject what they consider to be unfair offers. Furthermore, the proposers seem to recognize this, as the most common proposal in these games is close to 50/50.11 Joseph Henrich and colleagues conducted a cross-cultural study of this effect. They had participants from fifteen diverse cultures from Africa, Asia, Oceania, rural America, South America, and that most peculiar population, U.S. college freshmen, play the Ultimatum game. While they found “substantial differences across populations” they also discovered “a universal pattern, with an increasing proportion of individuals from every society choosing to punish [i.e. reject offers] as offers approach zero.”12
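To see why proposers converge on near-even splits, consider the proposer's expected payoff against a responder who usually rejects offers below the 25% threshold reported above (the 0.9 rejection probability and step-shaped acceptance rule are illustrative assumptions of mine, not figures from the studies):

```python
POT = 100  # dollars to be divided

def acceptance_prob(offer):
    """Assumed responder: offers below 25% of the pot are rejected
    with probability 0.9; offers of 25% or more are always accepted."""
    return 0.1 if offer < 0.25 * POT else 1.0

def expected_take(offer):
    """Proposer's expected share, given the responder's veto power."""
    return acceptance_prob(offer) * (POT - offer)

print(expected_take(1))   # 9.9  - the "rational" $1 offer expects almost nothing
print(expected_take(25))  # 75.0 - a modestly fair offer does far better
```

Once responders credibly punish stingy offers, generosity becomes the self-interested move for the proposer — which is exactly the interpretation the Dictator game tests below.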
In one sense, the Ultimatum game is not testing the willingness to cooperate as much as the willingness to punish those who do not cooperate. But this willingness to engage in “costly punishment” also needs to be explained as it meets evolutionary challenges as well as cooperation does.13 In each case the individual makes a choice that is costly in terms of resources. In cooperation, I invest my resources in another’s well-being; in punishment, I commit resources to punish, thereby incurring a cost. From a rationally self-interested position, to punish someone who has treated me unfairly is simply to further waste my resources.
These studies are all addressing the problem of altruism, defined as “behavior that benefits another organism – while being apparently detrimental to the organism performing the behavior,” with benefits and costs determined by the effects on an individual’s reproductive fitness.14 The problem is to understand how behavior that lowers an agent’s fitness in order to raise the fitness of another can arise from a process driven by so-called selfish genes. As Dawkins has put it, “at the level of the gene, altruism must be bad and selfishness good.”15
The Ultimatum game suggests one way to promote altruistic or cooperative behavior: Punish those who do not cooperate. As noted, the typical offer in an Ultimatum game is nearly an even split. Why would this be? To return to our scenario, Sue realizes that if she offers Pat too little, Pat may reject the offer and Sue will end up with nothing. In effect, Sue recognizes Pat’s ability to punish Sue’s greediness and so offers a fairer distribution of the goods. A variant of the Ultimatum game, known as the Dictator game, supports this interpretation. The Dictator game works the same as the Ultimatum game, except the second subject cannot reject the offer (the Ultimatum game: Take it or leave it; the Dictator game: Take it!). As we might expect, “the average amount given to the responders in the dictator game is much lower than that in the ultimatum game.”16 The role of punishment in promoting cooperation turns out to be of great significance, and I return to this topic later.
However, as we have said, punishment just repackages the problem of altruism. Punishment may turn out to be vital to large-scale cooperation, but punishing someone comes at a cost, which may or may not be paid back. As the Ultimatum game shows, there is a robust human tendency to punish unfair behavior, even when there is no possib...

Table of contents

  1. Cover
  2. Title Page
  3. Copyright
  4. Dedication
  5. Acknowledgments
  6. INTRODUCTION: EVOLUTION AND MIND
  7. 1: THE EVOLUTION OF MORALITY
  8. 2: THE EVOLUTION OF MORAL RELIGIONS
  9. 3: EVOLUTIONARY RELIGIOUS ETHICS: JUDAISM
  10. 4: EVOLUTIONARY RELIGIOUS ETHICS: CHRISTIANITY
  11. 5: RELIGION, VIOLENCE, AND THE EVOLVED MIND
  12. 6: RELIGION EVOLVING
  13. NOTES
  14. BIBLIOGRAPHY
  15. INDEX