1 Answer One: Decision Making Under Moral Uncertainty


5 Value Uncertainty


recommend the alternative with the highest expected moral value (e.g. Lockhart 2000). Consider the following example of this suggestion:



              Option A             Option B
T1 (p = 0.5)  Slightly bad (−1)    Slightly good (1)
T2 (p = 0.5)  Very good (100)      Very bad (−100)

Here, option A gets the total moral value 49.5 (−1*0.5 + 100*0.5) whereas option B gets −49.5. Consequently, some theorists argue, option A should rationally be chosen. (Moreover, option A remains the preferred alternative even when our credence in T1 is much higher than in T2).23
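Read together with the table, the expected-value rule amounts to a simple weighted sum. A minimal sketch in Python (the function name and data layout are illustrative, not from the source):

```python
# Sketch of the "highest expected moral value" rule from the example above.
# Credences and value assignments mirror the table; names are illustrative.

def expected_moral_value(credences, values):
    """Weigh each theory's value assignment by our credence in that theory."""
    return sum(credences[theory] * values[theory] for theory in credences)

credences = {"T1": 0.5, "T2": 0.5}
option_a = {"T1": -1, "T2": 100}   # slightly bad under T1, very good under T2
option_b = {"T1": 1, "T2": -100}   # slightly good under T1, very bad under T2

print(expected_moral_value(credences, option_a))  # 49.5
print(expected_moral_value(credences, option_b))  # -49.5

# Footnote 23's point: even with P(T1) = 0.99, option A stays ahead.
skewed = {"T1": 0.99, "T2": 0.01}
print(expected_moral_value(skewed, option_a)
      > expected_moral_value(skewed, option_b))  # True
```

The rule is structurally identical to expected utility maximization over empirical uncertainty, with moral theories playing the role of possible states of the world.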

Intuitively plausible as the suggestion may seem, there is a rather forceful objection against it: the problem of comparing moral value between different moral theories. Critics argue that all theories which have been suggested for how such intertheoretic comparisons of moral value would work are implausible, which they take to be sufficiently convincing reason against the idea. Contrary to how it may seem at first glance, they argue, moral values in different theories cannot be compared (Gustafsson and Torpman 2014; Sepielli 2009, 2013; Gracely 1996; Hudson 1989).

The second suggestion is that when we have positive credence in more than one theory, we should act on the theory in which we have most, even if not full, credence. The suggestion takes its cue from the skeptical conclusion that intertheoretic comparisons are not possible. Consequently, proponents of this suggestion argue, the main intuition-pump for weighing the moral value of all our potential moral theories into a resulting recommendation has no force. With different theories come different standards of evaluation, and so if one theory labels a particular action as ‘horribly wrong’ this does not mean that it is worse than something which is labeled ‘somewhat wrong’ by another theory. All we can say is that both consider the action to be morally wrong.

The upshot, according to this suggestion, is that even in the face of uncertainty, if there is one theory in which we believe more than in others, we should act in accordance with that theory.
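This second decision rule can be sketched even more briefly: it simply selects the theory with the highest credence and ignores intertheoretic value comparisons entirely. (The credence figures and theory names below are invented for illustration.)

```python
# Sketch of the second suggestion: act on the single theory you believe most,
# disregarding how much moral value each theory assigns to each option.
# Theory names and credence figures are invented for illustration.

def most_credible_theory(credences):
    """Return the theory in which we have the highest credence."""
    return max(credences, key=credences.get)

credences = {"duty theory": 0.45, "utilitarianism": 0.35, "virtue ethics": 0.20}
print(most_credible_theory(credences))  # duty theory
```

Note the contrast with the expected-value rule: here a theory with 45% credence settles the matter outright, even though the agent thinks it more likely than not (55%) that some other theory is correct.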

While this strategy, too, faces objections, it would take us too far to consider them here.24 Instead, we will end this subsection by discussing the potential problem with the moral uncertainty accounts as such: their exclusive focus on moral theories. In the debate, moral uncertainty is characterized as credence in more than one moral theory, and the suggested solutions are given by some function or other of this credence and the moral values the different theories assign to the available alternatives. There are several problems with both the characterization and the solution, however. First, the characterization seems too narrow to capture the relevant phenomenon properly. Agents who face value uncertainty need not even partially believe in any particular moral theory. It seems reasonable to claim that many people do not believe in any particular moral theory at all. They are committed to some values and norms, and take some features of an action – that it is kind, perhaps, or just, or that it produces wellbeing – to speak in favor of or against it; but they do not subscribe to any particular account of how these features come together which may be called a moral theory. Some may even be moral particularists who deny that there are moral theories in any interesting sense (Hooker and Little 2000). There is thus a worry that the debate about moral uncertainty captures only a small part of the phenomenon of value uncertainty. If I am uncertain about whether kindness or justice should be exercised in a particular situation, and this uncertainty is not due to factual concerns, then this is a case of value uncertainty whether or not I have a credence in several moral theories.

23 Indeed, even if P(T1) = 0.99, A would still be the better option.
24 The interested reader should turn to Gustafsson and Torpman (2014) for a recent run-down of the common criticism and some suggested rebuttals (including modifications to the suggestion).

The moral uncertainty theorist may of course argue that ‘moral theory’ should be understood broadly, including cases where we are committed to a set of values and norms rather than to a theory in a stricter sense.25 But even if we grant this, we run into the second, and more severe, problem: the sought solutions disregard the best available data. Even when it is correct to say that we have positive credence in more than one moral theory, this does not mean that our moral commitments are reduced to this credence – that all that matters in determining what to do is the credence we have in theories X, Y, Z, etc., and what moral values these theories assign to particular actions. When we form a belief in a moral theory, we do so because, among other things, we take it to fit well with many of our moral judgments in particular cases, the values we take as important, and so on. Perhaps I have a strong belief, as in the first example above, in the absolute wrongness of killing. I am uncertain about other aspects of the duty theory which has this as an absolute rule, but I fully believe in this particular prescription. Now, if my credence is divided between this duty theory and utilitarianism, and the choice before me is that of killing an innocent man or not, there would be nothing strange about letting this particular conviction play a deciding role in choosing what to do, even if I put more credence in utilitarianism overall.

In sum, it seems as if it is exactly when we are not fully committed to one single moral theory that it becomes central that our particular values and considered judgments play a role in deciding what to do – that is, the very aspects the debate about moral uncertainty reduces away.


25 Or she may bite the bullet, of course, arguing that she is interested in a more limited, but still interesting, problem. Even so, she faces the second problem in the main text.




2 Answer Two: Back to First Order Normative


The second answer to what to do if clarification of the problem or our values did not provide us with a solution takes as its starting point the insight with which we ended the last subsection: that the primary ‘data’ we have to work with when in value uncertainty is the set of moral values, norms and particular commitments which we hold. And where the previous answer tried to find a rational way forward given the remaining value uncertainty, the second answer insists that the way forward is to make your values hang together. If you value both justice and kindness, and you want to perform the kind action as well as the – in this case incompatible – just action, this is a signal that your values do not cohere sufficiently. When this is the case, you must find a way to handle this incoherence.

What this amounts to is that the general way forward when value uncertainty remains is to engage in the very theme of the present anthology (Hansson and Hirsch Hadorn 2016; Brun and Betz 2016): argumentation. It is only through argumentation, be it introspection or deliberation (and typically a mix of the two), based on the factual as well as normative information we may gain access to, that we may find a solution to our value uncertainty when clarity itself is not sufficient. In this anthology many such argumentative tools are presented. In this chapter I will focus on what I take to be the dominant methodological development of the basic idea of how to reach coherence in moral and political philosophy: the method of reflective equilibrium.

Reflective equilibrium is a coherentist method made popular by the political philosopher John Rawls in his seminal book A Theory of Justice.26 While the core idea is arguably as old as philosophy itself, Rawls’s illuminating treatment in the context of his theory of justice (and the developments by other philosophers in its aftermath) has become the paradigmatic instance of the method.27 (For further analysis, see also Brun and Betz (2016) in the current volume, where the tool of argument maps, strongly influenced by the conception of reflective equilibrium, is used.)

When faced with a normative problem – a problem about what we should do, how to act – we come armed with a set of beliefs about how the world is as well as about how it should be. These beliefs can – but need not – be particularly structured or theoretically grounded. Typically, however, our arsenal of value commitments contains both more general ones, such as perhaps the equal value of every person or that we should try to behave kindly to others, and more particular ones, perhaps intuitions pertaining to the very problem at hand: ‘What happens right here is wrong!’ The basic idea of reflective equilibrium is to scrutinise one’s set of beliefs and modify them until our normative intuitions about particular cases (which Rawls called our ‘considered judgments’) and our general principles and values find themselves in equilibrium.

26 Rawls (1999 [1971]). For earlier formulations, see Rawls (1951).
27 For a recent analysis, see Brun (2014). In a strict sense, ‘reflective equilibrium’ refers to a state of a belief system rather than a methodology. But it has become commonplace to refer to it as the method through which we try to arrive at this state.

The idea that we should modify our value commitments until they reach equilibrium is an analogue to how we should modify factual beliefs. As with value commitments, our factual commitments do not always cohere at the outset. Let us imagine that the communist-hunting Senator McCarthy both believed that the specter of communism haunted the United States and Europe and also believed that every statement in the Communist Manifesto is false.28 So far his beliefs seem to cohere perfectly. But what if he learnt that the very first sentence of the Communist Manifesto reads “The specter of communism haunts Europe”? If he learns this, we expect Senator McCarthy to modify his set of beliefs until they reach equilibrium.

In a similar vein, the method of reflective equilibrium demands that we be prepared to abandon specific normative intuitions when we find that they do not fit with intuitions or principles on which we rely more. Likewise for our principles and values: if we find that on closer examination they go against normative intuitions, principles and values that we are simply not prepared to abandon, they too must be modified. The goal is to reach a state of equilibrium, where all relevant normative commitments fit together.

The factual analogy further suggests how we should go about judging which, among competing values, we should put most faith in. McCarthy should find a coherent set of beliefs based on what he has best reason to believe. He may, for example, revise his belief that the US and Europe are full of communists: perhaps he has only US statistics to go on, and without good justification believed that what goes for the US must go for Europe as well. The stronger his belief in the total falsity of every sentence in the Manifesto, the more he must be prepared to find a coherent set of beliefs which includes this belief, no matter the costs. Another option is reinterpretation: as with the value propositions we have discussed above, our factual beliefs are often vague and possible to specify, perhaps in a way which makes the set coherent without having to abandon any belief. Senator McCarthy may perhaps remember that the Communist Manifesto was written in 1848, a hundred years before he started his anti-communist crusade. So the factual claim in the book clearly addresses the situation in Europe back then, and not in the 1950s. McCarthy may then believe that Marx and Engels were wrong about communism a hundred years earlier – ‘they were really very few back then’ – but continue believing that absolutely everything in that book is false and that the communists swamp the western world. Similarly, when our values are not in reflective equilibrium, we should scrutinise our reasons for holding on to our value commitments, general or particular. Something must go.


28 This example is from Brandom (1994: 516).
