The Ethics of Risk

Ethical Analysis in an Uncertain World


About This Book

When is it morally acceptable to expose others to risk? Most moral philosophers have had very little to say in answer to that question, but here is a moral philosopher who puts it at the centre of his investigations.

The Ethics of Risk by S. Hansson is available in PDF and ePUB format (Philosophy & Ethics and Moral Philosophy).

Information

Year
2013
ISBN
9781137333650
Part I
Why Risk Is a Problem for Ethics
1
The Uncertainties We Face
Before investigating the moral implications of our ignorance about the future, we need to characterize it and clarify the meanings of the words that we use to describe it. The most common of these is ‘risk’.
1.1 Risk
The word ‘risk’ has several well-established usages.1 Two major characteristics are common to them all. First, ‘risk’ denotes something undesirable. The tourist who hopes for a sunny week talks about the ‘risk’ of rain, but the farmer whose crops are threatened by drought will refer to the ‘chance’ rather than the ‘risk’ of precipitation.
Secondly, ‘risk’ indicates lack of knowledge.2 If we know for sure that there will be an explosion in a building that has caught fire, then we have no reason to talk about that explosion as a risk. Similarly, if we know that no explosion will take place, then there is no reason either to talk about a risk. We refer to a risk of an explosion only if we do not know whether or not it will take place. More generally speaking, when there is a risk, there must be something that has an unknown outcome. Therefore, to have knowledge about a risk means to know something about what you do not know. This is a difficult type of knowledge to assess and act upon.3
Among the several clearly distinguishable meanings of the word ‘risk’, we will begin with its two major non-quantitative meanings. First, consider the following two examples:
‘A reactor-meltdown is the most serious risk that affects nuclear energy.’
‘Lung cancer is one of the major risks that affect smokers.’
In these examples, a risk is an unwanted event that may or may not occur.4 In comparison, consider the following examples:
‘Hidden cracks in the tubing are one of the major risks in a nuclear power station.’
‘Smoking is the biggest preventable health risk in our society.’
Here, ‘risk’ denotes the cause of an unwanted event that may or may not occur (rather than the unwanted event itself). Although the two non-quantitative meanings of ‘risk’ are in principle clearly distinguishable, they are seldom kept apart in practice.
We often want to compare risks in terms of how serious they are. For this purpose, it would be sufficient to use a binary relation such as ‘is a more serious risk than’. In practice, however, numerical values are used to indicate the size or seriousness of a risk.5 There are two major ways to do this. First, ‘risk’ is sometimes identified with the probability of an unwanted event that may or may not occur.6 This usage is exemplified in phrases such as the following:
‘The risk of a meltdown during this reactor’s lifetime is less than one in 10,000.’
‘Smokers run a risk of about 50 per cent of having their lives shortened by a smoking-related disease.’
It is important to note that probability, and hence risk in this sense, always refers to a specified event or type of events. If you know the probability (risk) of power failure, this does not mean that you have a total overview of the possible negative events (risks) associated with the electrical system. There may be other such events, such as fires, electrical accidents, etc., each with their own probabilities (risks).
Many authors (and some committees) have attempted to standardize the meaning of ‘risk’ as probability, and make this the only accepted meaning of the word.7 However, this goes against important intuitions that are associated with the word. In particular, the identification of risk with probability has the problematic feature of making risk insensitive to the severity of the undesired outcome. A risk of 1 in 100 to catch a cold is less undesirable than a risk of 1 in 1000 to contract a deadly disease. Arguably, this should be reflected in a numerical measure of risk. In other words, if we want our measure to reflect the severity of the risk, then it has to be outcome-sensitive as well as probability-sensitive.8 There are many ways to construct a measure that satisfies these two criteria, but only one of them has caught on, namely the expectation value of the severity of the outcome.
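The contrast drawn here can be made concrete with a small numerical sketch (not from the book; the severity values are hypothetical, chosen only for illustration): a probability-only measure ranks the 1-in-100 cold above the 1-in-1000 deadly disease, while an outcome-sensitive, probability-weighted measure reverses the ranking.

```python
# Illustrative sketch: a probability-only risk measure versus an
# outcome-sensitive (probability-weighted) one. Severity values are
# hypothetical, chosen only for illustration.

hazards = {
    "common cold":    {"probability": 1 / 100,  "severity": 1},
    "deadly disease": {"probability": 1 / 1000, "severity": 1000},
}

# Risk identified with probability alone: insensitive to outcome severity.
by_probability = {name: h["probability"] for name, h in hazards.items()}

# Risk as probability-weighted severity: sensitive to both criteria.
by_weighted = {name: h["probability"] * h["severity"]
               for name, h in hazards.items()}

print(by_probability)  # the cold appears to be the larger risk
print(by_weighted)     # the deadly disease is the larger risk
```

On the first measure the cold (0.01) outranks the deadly disease (0.001); on the second the ordering flips (0.01 versus 1.0), which is the intuition the text appeals to.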
Expectation value means probability-weighted value. Hence, if 200 deep-sea divers perform an operation in which the risk of death is 0.001 for each individual, then the expected number of fatalities from this operation is 0.001 × 200 = 0.2. Expectation values have the important property of being additive. Suppose that a certain operation is associated with a 0.01 probability of an accident that will kill five persons, and also with a 0.02 probability of another type of accident that will kill one person. Then the total expectation value is 0.01 × 5 + 0.02 × 1 = 0.07 deaths. In similar fashion, the expected number of deaths from a nuclear power plant is equal to the sum of the expectation values of each of the various types of accidents that can occur in the plant.9 The following is a typical example of the jargon:
‘The worst reactor-meltdown accident normally considered, which causes 50 000 deaths and has a probability of 10⁻⁸/reactor-year, contributes only about two per cent of the average health effects of reactor accidents.’10
The same author has described this as ‘[t]he only meaningful way to evaluate the riskiness of a technology’.11 Another example of this approach is offered by risk assessments of the transportation of nuclear material on roads and rails. In such assessments, the radiological risks associated with normal handling and various types of accidents are quantified, and so are non-radiological risks including fatalities caused by accidents and vehicle exhaust emissions. All this is summed up and then divided by the number of kilometres. This results in a unit risk factor that is expressed as the expected number of fatalities per kilometre.12 The risk associated with a given shipment is then obtained by multiplying the distance travelled by the unit risk factor. These calculations will provide an estimate of the total number of (statistically expected) deaths.
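The arithmetic above, and the unit-risk-factor method for transport assessments, can be sketched as follows. The diver and accident figures are the ones quoted in the text; every number in the transport part is invented purely to illustrate the method (sum expected fatalities over all risk categories, divide by kilometres, then scale a shipment by its distance).

```python
# Expectation value = probability-weighted value (figures from the text).
p_death, divers = 0.001, 200
expected_fatalities = p_death * divers             # 0.001 * 200 = 0.2

# Additivity: expectation values of distinct accident types simply sum.
accidents = [(0.01, 5), (0.02, 1)]                 # (probability, deaths)
total_expected = sum(p * d for p, d in accidents)  # 0.07 deaths

# Unit risk factor for a transport assessment (all figures invented):
# total expected fatalities across categories, divided by kilometres.
expected_by_category = {
    "radiological, normal handling": 2e-6,
    "radiological, accidents":       5e-7,
    "vehicle accidents":             4e-5,
    "exhaust emissions":             1e-5,
}
total_km = 1_000_000
unit_risk_factor = sum(expected_by_category.values()) / total_km

# Risk of a given shipment: distance times the unit risk factor.
shipment_km = 500
shipment_risk = unit_risk_factor * shipment_km     # expected fatalities

print(expected_fatalities, total_expected, shipment_risk)
```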
The use of the term ‘risk’ to denote expectation values was introduced into mainstream risk research through the influential Reactor Safety Study (WASH-1400, the Rasmussen report) in 1975.13 Many attempts have been made to establish this usage as the only recognized meaning of the term.14
The definition of risk as expected utility compares favourably with the definition of risk as probability in one important respect: it covers an additional major factor that influences our assessments of risks, namely the severity of the negative outcome. However, other factors are still left out, such as our assessments of intentionality, consent, voluntariness, and equity. Therefore, the definition of risk as expected utility leads to the exclusion of factors that may legitimately influence a risk-management decision.
At face value, the identification of risk with statistical expectation values may seem to be a terminological issue with no implications for ethics or policy. It has often been claimed that we can postulate definitions any way we want, as long as we keep track of them. But in practice our usage of redefined terms seldom loses contact with their pre-existing usage.15 There is in fact often a pernicious drift in the sense of the word ‘risk’: A discussion or an analysis begins with a general phrase such as ‘risks in the building industry’ or ‘risks in modern energy production’. This includes both dangers for which meaningful probabilities and disutilities are available and dangers for which they are not. As the analysis goes more into technical detail, the term ‘risk’ is narrowed down to the expectation value definition. Before this change in meaning, it was fairly uncontroversial that smaller risks should be preferred to larger ones. It is often taken for granted that this applies to the redefined notion of risk as well. In other words, it is assumed that a rational decision-maker is bound to judge risk issues in accordance with these expectation values (‘risks’), so that an outcome with a smaller expectation value (‘risk’) is always preferred to one with a larger expectation value. This, of course, is not so. The risk that has the smallest expectation value may have other features, such as being involuntary, that make it worse all things considered. This effect of the shift in the meaning of ‘risk’ has often passed unnoticed.
Since ‘risk’ has been widely used in various senses for more than 300 years, it should be no surprise that attempts to reserve it for a technical concept have given rise to significant communicative failures. In order to avoid such failures, it is advisable to employ a more specific term such as ‘expectation value’ for the technical concept, rather than trying to eliminate the established colloquial uses of ‘risk’.16 It seems inescapable that ‘risk’ has several meanings, including the non-quantitative ones referred to above.
Before we leave the notion of risk, a few words need to be said about the contested issue whether or not risk is an exclusively fact-based (objective) and therefore value-free concept. It is in fact quite easy to show that it is not. As we have already noted, ‘risk’ always refers to the possibility that something undesirable will happen. Due to this component of undesirability, the notion of risk is value-laden.17 This value-ladenness is often overlooked since the most discussed risks refer to events such as death, diseases and environmental damage that are uncontroversially undesirable. However, it is important not to confuse uncontroversial values with no values at all.
It is equally important not to confuse value-ladenness with lack of factual or objective content. The statement that you risk losing your leg if you tread on a landmine has both an objective component (landmines tend to dismember people who step on them) and a value-laden component (it is undesirable that you lose your leg). The propensity of these devices to mutilate is no more a subjective construct than the devices themselves.18
In this way, risk is both fact-laden and value-laden. However, there are discussants who deny this double nature of risk. Some maintain that risk is ‘objective’, devoid of any subjective component.19 Others claim that risk is plainly a ‘subjective’ phenomenon, not concerned with matters of fact.20 These are both attempts to rid a complicated concept of much of its complexity. Both are misleading. A notion of risk that connects in a reasonable way to the conditions of human life will have to accommodate both its fact-ladenness and its value-ladenness. The real challenge is to disentangle the facts and the values sufficiently from each other to make well-informed and well-ordered decision processes possible.21
1.2 Uncertainty
For some of the perils that we worry about, meaningful probabilities do not seem to be available. For an example, consider a prime minister who is contemplating whether to put forward a government bill that will be unpopular in her own party. She may spend considerable time pondering the eventuality of a defection in the party that will lead to the bill being defeated in parliament. There are many aspects of this prospect that she will spend time on, but we should not expect the numerical probability of a defection to be one of them. A politician with a ‘betting’ attitude to such a decision would not stand a chance against those who focus instead on negotiations and the formation of coalitions.
Due to the association of ‘risk’ with quantitative measurement, it is customary to use another term in cases without numerical probabilities, namely ‘uncertainty’.22 A decision is said to be made ‘under risk’ if the relevant (objective) probabilities are known and ‘under uncertainty’ if they are unknown. In one of the most influential textbooks in decision theory, the terms are defined as follows:
‘We shall say that we are in the realm of decision-making under:
(a) Certainty if each action is known to lead invariably to a specific outcome (the words prospect, stimulus, alternative, etc., are also used).
(b) Risk if each action leads to one of a set of possible specific outcomes, each outcome occurring with a known probability. The probabilities are assumed to be known to the decision-maker. For example, an action might lead to this risky outcome: a reward of $10 if a “fair” coin comes up heads and a loss of $5 if it comes up tails. Of course, certainty is a degenerate case of risk where the probabilities are 0 and 1.
(c) Uncertainty if either action or both has as its consequence a set of possible specific outcomes, but where the probabilities of these outcomes are completely unknown or are not even meaningful.’23
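The quoted taxonomy can be rendered as a small sketch (not from the book; the representation of a decision as a list of hypothetical (value, probability) pairs, with probability None when unknown, is an assumption made for illustration):

```python
# Sketch of the quoted certainty / risk / uncertainty taxonomy.
# A decision is a list of (value, probability) outcomes;
# probability is None when unknown or not meaningful.

def classify(outcomes):
    """Classify a decision per the quoted taxonomy."""
    if any(p is None for _, p in outcomes):
        return "uncertainty"   # probabilities unknown or not meaningful
    if len(outcomes) == 1 or all(p in (0.0, 1.0) for _, p in outcomes):
        return "certainty"     # a degenerate case of risk
    return "risk"              # known probabilities

coin_bet = [(10, 0.5), (-5, 0.5)]    # fair coin: decision under risk
sure_thing = [(10, 1.0)]             # certainty
defection = [(1, None), (-1, None)]  # the prime minister's predicament

print(classify(coin_bet), classify(sure_thing), classify(defection))
```

The last case mirrors the prime minister's situation from the start of this section: the outcomes are identifiable, but no meaningful probabilities attach to them.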
A few comments are in order about the notion of uncertainty. First, uncertainty differs from ‘risk’ in not implying undesirability. We can have uncertainty, even in this technical sense, about desirable future events.
Secondly, this technical usage of the terms ‘risk’ and ‘uncertainty’ differs distinctly from quotidian usage. In everyday conversations, we would not hesitate to call a danger a risk even though we cannot assign a meaningful probability to it. Furthermore, in non-technical language uncertainty is a state of mind, i.e. something that belongs to the subjective realm. In contrast, ‘risk’ has a strong objective component.24 If a person does not know whether or not the grass snake is poisonous, then she is uncertain ...

Table of contents

  1. Cover
  2. Title
  3. Introduction
  4. Part I  Why Risk Is a Problem for Ethics
  5. Part II  Making Prudent Risk Decisions
  6. Part III  Solving Conflicts of Risk
  7. Notes
  8. References
  9. Index