Friendship and Agent-Relative Morality
176 pages · English
About This Book

First published in 2001. Morality is often viewed as a demanding and unsympathetic taskmaster, and as an external, foreign, even alien force. The moral life, on such a view, is a labor not of love but of duty. One of the guiding intuitions of this book is that this picture of morality is deeply and pervasively wrong. Morality is not an external or alien force, and it is not at all disconnected from the agent's values, or from her good. Indeed, what is morally required of an agent will depend a great deal on, and will thus reflect, that agent's values, commitments, and relationships.

Friendship and Agent-Relative Morality, by Troy A. Jollimore, is available in PDF and ePUB format.

Information

Publisher
Routledge
Year
2018
ISBN
9781135724290

CHAPTER 1

Introduction

1. THE OBJECTION FROM FRIENDSHIP

Moral theories demand that agents act in certain ways; and the demands of a given moral theory may of course conflict with the other demands and reasons which press on a moral agent and which arise from various sources. To an extent, this is only to be expected: moral reasons, after all, are not our only reasons for action; nor are they the only reasons on which we are ever justified in acting. But if the conflict is too deep, or too pervasive, it may give us a reason to doubt the legitimacy of the moral demands in question. One test of a moral theory, then, comes in examining the relationship of the reasons and requirements which it endorses, to an agent's nonmoral (or at least, not purely moral) reasons for acting. Since this test concerns the ability of a given set of moral reasons and requirements to fit into the more general set of practical reasons which apply to an agent, we can refer to it as the Practical Adequacy Test.
Moral theories can be divided into two classes: the agent-neutral and the agent-relative. The former is sometimes associated with consequentialist and the latter with deontological moral theories. (However, as I will discuss in a moment, this is something of a simplification.) Recognition of the significance of the distinction between these two types of moral theory is a common theme in recent moral philosophy.1 In this book I want to argue that theories which take agent-neutral form are doomed to fail the Practical Adequacy Test. They fail because the reasons arising from such theories conflict in a serious and unacceptable way with the demands and reasons of friendship. Specifically, the conflict is of the following sort: an agent whose life is governed by an agent-neutral moral theory (i.e. an agent who acts so as to meet the moral requirements of an agent-neutral moral theory) will be unable to be a friend, to participate in friendships, or to enjoy many of the goods and values associated with friendship. I will refer to this objection to consequentialist theories as the Objection from Friendship.
In taking this to be a serious objection to agent-neutral moral theories—indeed, an objection which, if successful, justifies us in rejecting such theories—I am taking it that friendship is an important value, and one whose flourishing a moral theory should not prevent or significantly diminish. I do not intend to defend this claim here; but I must say it seems to me so highly plausible as to require little or no defense.2 (Keep in mind that to deny this claim it is not sufficient that one be prepared to do without friendship oneself, or that one find friendship unsatisfying or irrelevant to one's own life; one must also be prepared to deny the general significance of friendship, and the general legitimacy of people's claims to be allowed to take part in friendship should they so choose.) If, however, there is someone who does not find this claim plausible, then they will not be persuaded by my general line of argument against agent-neutral moral theories.
It should also be said that my purpose here is not to develop a philosophical account or analysis of the value of friendship, nor to propose a detailed account of precisely what friendship requires of us. My plan is rather to use a fairly general and intuitive account of friendship—one which will be described shortly—to investigate the distinction between agent-neutral and agent-relative moral theories and, ultimately, to defend the thesis that any adequate moral theory must take an agent-relative form.

2. AGENT-NEUTRALITY, AGENT-RELATIVITY, AND CONSEQUENTIALISM

As I have mentioned, agent-neutrality in ethics is often associated with consequentialist theories. As a means of understanding the idea of agent-neutrality, and the distinction between that concept and the concept of agent-relativity, it will be useful to consider how a fairly simple consequentialist theory, such as act utilitarianism, can evolve into more sophisticated forms in response to a certain sort of criticism.
Act utilitarianism demands that agents always do whatever is necessary to maximize general happiness or welfare. Thus, if an agent is able to bring about either (1) a state of affairs in which one person receives a benefit, or (2) a state of affairs in which two agents receive benefits each of which is equal in magnitude to the benefit conferred in (1), then all else being equal act utilitarianism will require the agent to bring about the second state of affairs. This is true regardless of the identities of the persons involved: thus, the fact that it is the agent herself who is to receive a benefit in the first outcome, but not in the second, would not alter the moral requirement that the agent bring about the second outcome.3
One problem with act utilitarianism is that it seems not to recognize certain apparently legitimate moral reasons for acting. For instance, suppose I have made a promise to perform a certain action. It would seem that I have a moral reason to keep my promise which arises not from the fact that doing so would lead to certain consequences in terms of general happiness or well-being, but which arises instead directly from the fact of my having made a promise. But act utilitarianism can only recognize reasons of the former sort. The act utilitarian can argue, of course, that it generally maximizes welfare to keep our promises. But even if it is true that promise-keeping usually maximizes welfare, there will be cases in which it does not; and even if promise-keeping always maximized welfare, this would still not allow act utilitarianism to accommodate the claim that we have moral reasons which arise not from facts regarding the effects of our acts on general welfare, but rather from facts regarding commitments we have already made.
Some have thought, however, that act utilitarianism could be modified so as to avoid such difficulties simply by modifying its account of the good. The problem with promise-keeping, for instance, can be avoided by adopting a theory of the good which counts the breaking of a promise as itself a kind of bad consequence.4 On the resulting theory—call it Promise Consequentialism—an outcome in which all promises are kept is ceteris paribus better than an outcome in which some promises are broken. Such a theory, which is consequentialist but not strictly utilitarian, seems able to recognize the direct relevance of facts regarding promises an agent has made prior to acting precisely because its criterion for determining the value of an action's consequences takes into account facts regarding whether promises have been made and then broken.
Although it might be doubted whether the concept of a consequence can really be stretched this far, on the whole this strategy has been accepted as a valid course for consequentialist theories to take. And despite the fact that such theories can differ from simpler act utilitarian theories in many ways, there is an important common element between them. Because Promise Consequentialism counts the breaking of a promise as itself a bad consequence of an action, it is able to recognize that a promise's being kept has a kind of intrinsic moral value, and is also able to rebut the charge that a consequentialist agent will often be required to break her promises so as to maximize the greater good. But now suppose that an agent were to face a choice between bringing about an outcome in which one promise is broken and bringing about (or allowing to come about—which, for the consequentialist, amounts to the same thing) an outcome in which two promises are broken. All else being equal, Promise Consequentialism would clearly require the agent to bring about the former rather than the latter state of affairs. And this would be true even if, in the former state of affairs, it is the agent himself who breaks a promise, whereas in the latter state of affairs the promises are broken by people other than the agent in question.
The situation is analogous to the one considered above, in which act utilitarianism required an agent to forgo a benefit to herself in order to bring about an outcome in which two other people received identical benefits. Thus, while Promise Consequentialism has a more sophisticated conception of the good than does act utilitarianism, the two theories are similar in a very important way: they are both agent-neutral, in the sense in which I will be using that term. The idea of agent-neutrality should now be easy to understand. As it appears in consequentialist theories, it is simply the idea that the (relative) value of a state of affairs is the same regardless of whose perspective we are evaluating it from.5 For an act utilitarian, for instance, all that matters in determining the value of a state of affairs is how much welfare that state of affairs contains. The value is thus the same relative to everyone: a situation in which 10 people are doing very well and 1 is doing very badly is, according to act utilitarianism, more valuable and thus morally preferable to one in which 9 people do very well and 2 people do very badly, and the fact that I do very badly in the former and very well in the latter would simply not be relevant to the question of which outcome I (or anyone else) ought to bring about. Similarly, Promise Consequentialism holds the state of affairs containing the fewest promise-breakings overall to be the most valuable one, and therefore holds it to be the one I (and everyone else) ought to bring about, even if that state of affairs is not the one in which I commit the fewest promise-breakings.6
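The agent-neutrality of Promise Consequentialism can be made vivid with a small illustrative sketch. The outcomes, names, and scoring function below are invented purely for exposition (they are not part of the author's argument); the point is simply that the evaluation function never mentions who the evaluating agent is:

```python
# Hypothetical outcomes: each records which promises get broken, and by whom.
outcome_one = [("me", "promise to Alice")]            # I break one promise
outcome_two = [("Bob", "promise to Carol"),
               ("Dan", "promise to Eve")]             # two others each break one

def promise_consequentialist_value(outcome):
    """Fewer promise-breakings = better outcome; the identity of the
    promise-breakers plays no role in the evaluation."""
    return -len(outcome)

# Every agent -- me included -- gets the same verdict: outcome_one is
# better, even though it is the outcome in which *I* break a promise.
best = max([outcome_one, outcome_two], key=promise_consequentialist_value)
```

Because the scoring function takes no "evaluator" parameter, the ranking it induces is the same from everyone's point of view; an agent-relative theory would, in effect, have to pass the evaluating agent in as an argument.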
It must be emphasized that what matters here is facts regarding the relative value of a given outcome—that is, facts regarding which outcomes are better than the outcome in question, and which outcomes are worse. An agent-neutral moral theory need not, then, be committed to the thesis that we can measure the values of outcomes in a sense that would allow us to assign a number to each corresponding to the amount of value it contains. If a moral theory holds that states of affairs can be ranked in terms of value, and if the comprehensive ranking of states of affairs it provides is the same for all agents, then that moral theory is agent-neutral, regardless of whether the theory allows that the magnitude of the value embodied in any given outcome can be numerically represented.
This point can be illustrated through an example. Given a world of three individuals, act utilitarianism may provide us with the following value ranking of a set of four possible states of affairs, which we may call (in order of decreasing value) A, B, C and D:
Table 1.1: A Utilitarian Value Ranking of Four States of Affairs

          Ian   Shelly   Nora
    (A)     5        5     10
    (B)     3        7      8
    (C)     7        5      2
    (D)    12       −5     −5
Each number represents the amount of welfare enjoyed by a given person in a given state of affairs. (A), in which the agents' welfare totals 20, is a more valuable state of affairs than (B), in which the total is 18; similarly (B) is more valuable than (C), and (C) more valuable than (D). In this example it is possible to assign a number to each state of affairs, corresponding to its value as estimated by act utilitarianism. But, as emphasized earlier, this is only incidental to the question of agent-neutrality. What matters so far as agent-neutrality is concerned is simply the fact that the ranking of states of affairs in terms of value is the same for all agents: everyone has moral reason to prefer (A) to (B), (B) to (C), and (C) to (D)—including Ian, who of course does much better in (D) than in (A). Any moral agent, then, who is in a position to bring about either (A) or (D) ought, according to act utilitarianism, to bring about (A). The fact that the identity of the agent is irrelevant to which outcome an agent ought to bring about explains why agent-neutral theories such as act utilitarianism are said to be based around the moral significance of the impersonal point of view. As Derek Parfit describes it, such theories give all moral agents the same aim.7
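The act-utilitarian ranking of Table 1.1 can be reproduced in a short sketch. The welfare figures are taken from the table; the code itself is merely an expository device:

```python
# Welfare levels per person in each state of affairs (from Table 1.1).
states = {
    "A": {"Ian": 5,  "Shelly": 5,  "Nora": 10},
    "B": {"Ian": 3,  "Shelly": 7,  "Nora": 8},
    "C": {"Ian": 7,  "Shelly": 5,  "Nora": 2},
    "D": {"Ian": 12, "Shelly": -5, "Nora": -5},
}

def utilitarian_value(state):
    """Act utilitarianism: a state's value is simply its total welfare."""
    return sum(state.values())

# The ranking takes no 'evaluator' argument, so it is the same for every
# agent -- including Ian, who does much better in (D) than in (A).
ranking = sorted(states, key=lambda s: utilitarian_value(states[s]), reverse=True)
# ranking == ['A', 'B', 'C', 'D']  (totals: 20, 18, 14, 2)
```

As the text notes, the availability of a numerical value for each state is incidental: agent-neutrality requires only that the comprehensive ordering produced here be identical for all agents.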
Some theories, by contrast, are agent-relative. Consider, for instance, a theory which tells us that the most valuable outcome each person can bring about—and thus, the outcome she morally ought to bring about—is the one in which she keeps her own promises. On such a theory, the question of whe...

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Table of Contents
  6. List of Tables
  7. Preface
  8. Acknowledgments
  9. Chapter 1: Introduction
  10. Chapter 2: Consequentialism and Friendship
  11. Chapter 3: Morality and Its Limits
  12. Chapter 4: Agent-Neutrality
  13. Chapter 5: Three Accounts of Agent-Relativity
  14. Chapter 6: Agent-Relativity: The Moral Preferability Account
  15. Bibliography
  16. Index