ETHICS, SCIENCE, AND RELIGION
people diﬀerent things at diﬀerent places and times. And the enterprise of trying
to work out what the gods are like and what they wish us to do, by starting from
the world as we have it and as we can best reason about it, leads to nothing. As
David Hume put it in his inimitable way (Hume 1999/1748: §11, para. 22),
While we argue from the course of nature, and infer a particular intelligent cause, which ﬁrst bestowed, and still preserves order in the universe, we embrace a principle, which is both uncertain and useless. It is
uncertain; because the subject lies entirely beyond the reach of human
experience. It is useless; because our knowledge of this cause being
derived entirely from the course of nature, we can never, according to
the rules of just reasoning, return back from the cause with any new
inference, or making additions to the common and experienced course
of nature, establish any new principles of conduct and behaviour.
With this door ﬁrmly closed, the only recourse is to privilege some one revelation: a preferred text or authority to whom the power of dictating ethics is
ceded. But such a choice cannot rationally settle the question of whether the
conﬁdence is rightly placed, or whether we have to pick and choose from among
the dictates that the authority issues. We have to ﬁlter those through our own
sense of what is good or bad, right or wrong. We may defer to authority, but we
retain the power to judge its deliverances.
All this is plain enough, and in the modern world the threat from religion is
easier to diagnose and to neutralize than that from science. Indeed, many fire-breathing agnostics and atheists would doubt that human religions can bring
anything at all of value to the enterprise of thinking how to live. They might
think of religions as outdated, “primitive” attempts at science, better to be
retired in favor of the real thing (Dawkins 2006). They might think of the
obvious downside: religions as the nurseries of sectarian conﬂict, and as repositories of bigotry, fanaticism, intolerance, misogyny, xenophobia, and racism.
They might think of religions as responsible for the arrogance of human beings,
both in their dealings with others who are identiﬁed as in need of conversion,
and in dealings with the animal world and the natural world, supposed to be
there entirely for our beneﬁt. They might see religions as bastions of ignorance
and enemies of free inquiry.
All these points must be acknowledged, and they indeed disqualify us from
handing ethics over entirely to religious authority. But they need not by themselves disqualify religions from having a voice in the conversations in which we
try to discover what to do. They may deserve such a voice not as fountains of edicts
and commands, but because of their time-tested ability to tap into the emotional
and social needs of people (Durkheim 2001/1912). The point here is that we
cannot be content to leave it a mystery why religions are so appealing and why
they have such staying power. This is something that needs explanation, and the
only plausible explanations are likely to tell us important truths about ourselves.
Even if religions only ﬂourish because of ignorance and gullibility, as the more
patronizing atheists suppose, then we need a politics and a morality contoured to
ignorant and gullible people. If more realistically we agree that they speak to
emotions such as terror of the unknown, or to our need to cope with common
vulnerabilities to distress, sickness, and loss, or to the desire for hope or consolation, then our politics and ethics need to remind themselves that we are
fearful and vulnerable people who crave hope and consolation. It will be no
good crafting policies suitable only for rational, unemotional, self-suﬃcient,
and self-governing people if the whole course of human history contains unmistakable signs that this is not what we are like. A mature religion will be a repository of social, emotional, and spiritual expressions of a culture. The people will
not have been perfect: if they were misogynistic and intolerant and xenophobic
then their expressions will reﬂect that, and will in turn cement and perpetuate
the vices. But they will not have been all bad either: if they needed to ﬁnd ceremonies and words to express togetherness, grief, loss, desolation, hope, reconciliation, then we will need to do the same in our own ways. It is as foolish for
ethics to ignore the resources that this makes available as it would be to
ignore the rest of the literature, music, architecture, or art of our culture. The
path of wisdom will be not to ignore religious expressions, but to discriminate among them.
A more interesting functional story will itself consider the relation between
religion and ethics. In spite of Plato and Hume, many people ﬁnd an absence of
religious authority disquieting. They fear that it leaves only nihilism, or the loss
of values, that it leaves us rudderless and incapable of true principle and true
commitment. They resonate to Dostoevsky’s dark aphorism that if God is dead,
everything is permitted. Well into the modern era the word “atheist” served as a
virtual synonym for being amoral, or unprincipled, or a libertine (Berman 1990).
Philosophers may patronize all this as a mistake, but if it is a mistake to which
people are highly vulnerable, then perhaps one of the most important functions
of religion is precisely to serve as a bulwark against the loss of values or the loss
of principles. The gods of battle fortify people, and a ﬁghting unit with public
rituals whereby each member knows that each of the others makes the right
overtures to these deities, is very likely to be better than one in which many are
suspected not to do so (and this begins to explain why atheism is so often seen as
dangerous and corrosive). It is as if by taking part in the right ceremonies, or
hearing ourselves say the right words, we armor ourselves as we need to do
against faltering when things become diﬃcult. Still more, by seeing our neighbors
simply reciting the ten commandments or singing the right songs, we can be
reassured, as we need to be, that they share enough of our own values, that they
too can be relied upon or trusted. Religion, then, is a “hard to fake sign of
commitment” (Irons 2001). It does not give the source of values, but in many
social and political circumstances, helps people to stand by them.
Ethics and science
Religion is visibly in the ethical business of shaping and aﬃrming our practical
identities, whereas on the face of it science has a much less intimate relationship
with ethics. The oﬃce of science is to tell us what the world is like; the oﬃce of
ethics is to direct how we respond to it. The gap between science and ethics is
canonized in the “fact–value” distinction or the “is–ought” gap: closely related
or identical ways of reminding ourselves that it is one thing to know how things
stand, but another to know what may be done or should be done about them.
Science is in the business of cognition, and ethics in the business of conation, or
the directions of the will and of choice, attitudes, and emotions.
The diﬀerence is sometimes put by denying that we can infer a value from a
fact, or by claiming that doing so commits something called the “naturalistic
fallacy,” but these thoughts are incorrect. Suppose we consider norms or obligations. It is quite in order to infer that we have to do something from the fact
that a child is injured and we are the nearest source of help, or to infer that we
ought to turn down the music from the fact that a neighbor is trying to sleep.
Good people will be guided by the fact to the appropriate belief about what they
should do. They would ﬁnd the inferences as immediate and inexorable as any
other. The question is only one of the kind of defect on show if a person is not
guided as they should be. It is not a question of semantics or meaning: someone
could understand the situation in the right terms, but deny the obligation, and
they would not therefore convict themselves of some linguistic error. This was
what G. E. Moore was concerned to show when he christened the naturalistic
fallacy. The fallacy lies not in making the inference, but in supposing that it can
be underwritten or compelled simply by a deﬁnition. The failure is not a defect
of logic either: the person who is not guided in the right way is not on the face of
it on the road to contradicting himself, which is the pre-eminent sign of logical
failure. Rather the defect is one of what St Augustine called “the pull of the will
and of love.” In other words, it is a defect of practical coloration or motivation,
of failing to be moved or to care as the situation requires. The person who is not
moved by the facts of the case as the virtuous person would be is not usually
semantically or logically challenged, but simply careless, callous, or selfish.
In spite of all this, science has imperialistic ambitions, and it is certainly
tempting ﬁrstly for scientists to believe that they have something special to oﬀer
to ethics, and secondly for philosophers to hope that some of the prestige of
science might rub oﬀ onto their subject, revealing it as truly objective, empirical,
experimental, and therefore authoritative. Indeed, for a long time biology in
particular has been a vociferous contributor to the ethical conversation, often
exhibiting implications, or supposed implications, of the Darwinian account of
evolution for human nature. The discovery that we have to think of ourselves as
closely related to other animals, and like them needing to live and breed in
jungles red in tooth and claw, could hardly fail to inﬂuence interpretations of
human nature, generally in the pessimistic direction of seeing ourselves as
necessarily characterized by selﬁshness, deceitfulness, manipulativeness, promiscuity, and a complete lack of expensive emotions including altruism, empathy, guilt, or remorse. But this is a description of a psychopath rather than a
normal human being, and science itself has contributed to casting doubt on the
misinterpretations of Darwinism that superﬁcially seemed to require it to apply
to us. There are circumstances in nature in which less aggressive organisms are
more successful than more lethal ones, and in which organisms survive by being
parts of wholes within which cooperation is as pronounced as competition. It is
then a question of interpreting the history of humanity, and its diﬀerent manifestations at diﬀerent times and in diﬀerent circumstances, to ﬁnd out which
traits come to prevail. So, rather than imposing a monolithic and unalterable
“human nature,” evolution is better seen as equipping us with functions for
absorbing environmental data, and forming the motivational states that eventually move us in response to what we ﬁnd. Human nature would turn out to be
quite plastic, ready to be molded and shaped by experience, including most
saliently our experience of other people. Our plasticity enables surrounding
culture to mold our sympathies, our capacity for taking another’s point of view,
our sense of justice and the sources of pride, self-worth, shame, and guilt, just as
the plasticity of our linguistic abilities equipped us to pick up any of the mother
tongues with which we might have been surrounded.
Currently, however, there is an explosion of interest in a much wider conﬂuence of disciplines: biology, developmental psychology, evolutionary psychology, cognitive science, chemistry, neurophysiology, neurology, game theory,
economics, and sociology, hopefully pulling together to generate a truly scientiﬁc
account of human nature: who we are and how our practical lives should proceed (Hauser 2006; Sinnott-Armstrong 2008). The size of the explosion is partly
due to the arrival of new experimental methods. In the brain sciences, there is
the relatively recent appearance of neural imaging and the widespread dissemination of hitherto specialist studies of brain damage and its eﬀects (Damasio
1994; Greene 2008). For the ﬁrst time we can learn, for example, whether the
areas of the brain that are excited by emotional events are also excited when we
exert moral judgment, or whether a lesion that destroys emotional responses
also destroys moral ones. In the social sciences there is the even more recent
harnessing of the World Wide Web for cross-cultural research on a scale that
was hitherto impossible. For the ﬁrst time we can learn whether quite complex
moral and ethical ideas are conﬁned to our own culture, or whether they have a
universal appeal (Hauser 2006; Knobe and Nichols 2008). For example, studies
have tested whether the so-called principle of double eﬀect, the idea that it can
be permissible to tolerate a foreseen but unwanted event as a bad side eﬀect of a
plan, when it would be impermissible to intend the same event as part of the
execution of an equivalent plan, is a parochial and variable feature of some
moral schemes, or is felt to be compelling across the world. The more widely a
principle is found, the more likely it may seem that it is part of our human
birthright, perhaps by being a deliverance of an innate “moral module” molded by natural selection.
These advances in instrumentation are exciting, and nobody can doubt that
they have produced fascinating results. This is compatible with recognizing one
or two caveats about their use, of which practitioners are largely aware. One
caveat about web surveys is quite obvious. Respondents are self-selected, and
only drawn from the admittedly huge community of people who ﬁrst have access
to computers, second have the leisure to ﬁll in questionnaires, and third have an
interest in the kind of question being asked. Finally, while they give us a time
slice of responses, they do not enable a diachronic look at how human beings
might have thought and felt at diﬀerent times, nor indeed may they be a reliable
guide to how they feel, as opposed to how they say they feel, even at the present
time. Thus, suppose a survey showed that worldwide nearly everyone who
answers the question ticks the box saying they approve of treating animals
kindly. We would be ill-advised to see this as a constant of human nature either
through time or even in the present, in the face of abundant evidence of the
contrary from history and from the courts. Surveys are better at telling whether
people talk the talk, rather than whether they walk the walk.
The idea of an innate moral module, universally programmed to give all
human moralities a shared underlying structure, is modeled on Chomsky’s
similar hypothesis about an innate grammar constraining the form that possible
human language may take. The principal argument for Chomsky’s innatism was
the celebrated “poverty of stimulus”: the disparity between what he saw as the
meagre input to the infant’s learning of a language, and the torrential output,
which is the ability to see immediately and without conscious thought the syntactic structure (and the meaning or semantics) of any of the vast number of
sentences that can be formed in its native language. Furthermore we have only
the haziest idea of how we do it, or how we could write down programs for
computing grammaticality, or principles for showing other people how to do it.
The moral case bears only a rather distant analogy to this. The perception of
meaning is quick, unconscious, inﬂexible or immune to inﬂuence from collateral
information, and perfectly certain across a prodigiously wide spectrum. By contrast our moral judgments are often hesitant, articulable in conscious thought,
responsive to collateral information, while certainties are far from abundant and
far from the norm. Finally the universal features that have been discovered
include such things as the principle of double eﬀect, sensitivity to a diﬀerence
between actions and failures to act (it being worse to kill than to let die), and
proximity to harm as increasing moral responsibility. But it is not diﬃcult to see
how the ordinary constancies of human life and infant upbringing could bring
about a universal sensitivity to these features. The child who is emphatically
taught not to eat dirt will only later and probably with less emphasis be told that
he is responsible for standing by and letting his little brother eat dirt; and it is
cases in which we can most easily see the eﬀects of our actions that form the
paradigms and prototypes of bad behavior. A child is inducted in all this by
examples, stories, and admonitions as well as by imitating and practicing the
activities of allowing and forbidding conduct. So the disparity between the
materials with which it learns and what it makes of them is much less marked
than in the linguistic case, and there is correspondingly less plausibility to any
argument from the poverty of stimulus to a richly structured innate endowment.
A second caveat reverts to the is/ought distinction. Suppose we do have a
robust piece of data about how nearly everybody feels about some question. It is
still open to the moralist to lament this uniformity as the unfortunate product of
a fallen human nature, and indeed there are clear examples where this is a fairly
natural response. Utilitarians, for instance, pride themselves on the doctrine that
everyone counts for one and nobody for more than one; the “calculus”
of happiness has no discounts or multipliers, and it is only the total aggregate of
happiness that underwrites value. A sociological result that in fact people do
discount for distance, having extremely tribal moralities that separate insiders
from outsiders in invidious ways, is saddening no doubt, but it does nothing to
impugn utilitarianism as an ideal, or even as a practical value to preach. It may
suggest that there will be an awful lot of preaching to be done, or even that while
we may be able to move towards the ideal it is unlikely that we will ever reach it.
But then sober utilitarians will always have known that. The same structure
applies if we ﬁnd that men are generally more faithless than women, or that
people ﬁnd it more tolerable or permissible to pull a lever whereby someone
dies at a distance, than to be in close proximity and push them oﬀ a bridge. The
natural response is that no doubt we feel like that, and it may be evolutionarily
explicable why we do, but we shouldn’t.
The widespread interest in the results of the brain sciences has generated work
of a diﬀerent kind, largely hoping to disentangle the various contributions of
emotional or “aﬀect” mechanisms from cognitive or rational mechanisms in the
formation of moral judgment. The hope is that this work will speak to the old
division between sentiment-based approaches to moral philosophy, whose principal ﬁgurehead is David Hume, and more ratiocentric approaches, associated
above all with Kant, but also with the Platonic tradition in classical thought,
whereby it is the privilege of reason to control and direct the otherwise unruly
passions. Neither of these positions is merely historical, and they form the two
poles around which a great deal of contemporary theorizing revolves. They are
however only unfortunately presented as polar opposites: Kant, at least in his
later work, pays cautious attention to aﬀective states (Kant 1996/1803), while
Hume constantly emphasizes the role of cognition and reason in isolating and
distinguishing the entangled morally relevant features of human actions and
human characters (Hume 2006/1751). Furthermore, Hume himself does not
work with the categories of “aﬀect” or “emotion.” His preferred term is the
“passions,” and he includes a large spread of diﬀerent states: “desire, aversion,
grief, joy, hope, fear, despair and security,” not to mention indirect passions,
which include “pride, humility, ambition, vanity, love, hatred, envy, pity, malice,
generosity, with their dependants” (Hume 1978/1739: 276–7). He is also clear that
there are “calm passions” that are not conscious, but more like dispositional or
functional states of whose existence we may not be immediately aware. Finding,
therefore, that someone may be morally motivated, or inclined to judge a situation, without visible activity in parts of the brain that become excited when
heart-pounding emotions such as fear or anger assail us, is not likely to touch his theory.
This introduces two important aspects of research into the neural substrates of
our psychologies. One is that we can take up positions towards the world –
stances and dispositions towards actual and potential doings, for example –
without any salient phenomenology. Just as I can build a fence in an unemotional
and clinical frame of mind, so I can forbid a course of action or intend to
enforce a policy. Our ethics may bubble up emotionally on enough occasions,
when sympathy and sorrow, or anger and indignation, or shame and guilt make
themselves felt. But it is still there in our cold, calm dispositions and intentions.
It is sometimes argued that if a Humean has such an expanded conception of the
passions, the view that motivations in general and ethics in particular require
passionate engagement with the world becomes a mere tautology, reducing to the
triviality that only motives motivate (Nagel 1970). But this is overly pessimistic.
In fact dispositions – including attitudes, resolution, and intentions – serve as
intervening variables in psychology in exactly the same way that theoretical
counterparts such as forces and ﬁelds do in physics. They are known by their
functions and eﬀects, certainly, but they stand at the service of an indeﬁnite
number of predictions and ways of dealing with systems. Hume’s insight was
that an ethical commitment bears greater resemblance to a practical stance or
disposition than it does to more ordinary empirical judgments. Ethics is directly
about motivation and practice in a way that empirical judgments are not (Smith
1994; Blackburn 1998).
The other moral to take from this is that brain writing is no easier to interpret
than human writing and human behavior, and indeed needs calibrating against
the overt practices of people. So if, for instance, an area of the brain associated
with problem solving is shown to be active when we work out whether to accept
something as subtle as the doctrine of double eﬀect, this conﬁrms that there is
indeed working out going on, but it does not tell us whether we are working out
a plan or policy, or working out a plain matter of fact, or exercising imagination,
or even oﬄine rehearsals of what to feel, or doing something more like a mathematical problem or a crossword puzzle solution. That has to be shown by the
resultant behavior, which may include many social expressions, such as writing
the distinction into criminal laws, embodying it in soap operas and morality tales
for the young, or simply tending to shun more completely those who intend
harm as opposed to those who put up with it. It is in the light of all these activities that candidates for the neural substrate of morality have to be interpreted.
Even gross neurological disorders, such as that suﬀered by the unfortunate ventromedial prefrontal cortex patient Phineas Gage, need interpreting as deﬁcits of
aﬀect, or of sense of self, or of imagination, or as admixtures of all of these
(Damasio et al. 1991; Damasio 1994; Gerrans 2008). The moral is that there is no
short cut to exercising interpretative caution; as with medicine, the new tools
and shiny modern machinery need human understanding adjoined to them if
they are to tell us the things we want to know.
The way forward
The reader may reasonably worry whether we have talked ethics into an impasse.
If religious authority does nothing to underwrite it, as the Euthyphro dilemma
suggests, and if science itself comes up against the is/ought gap, then the subject
may seem to be not so much elusive as chimerical, an exercise of the mind
founded on nothing more than an illusion. The old ghosts of nihilism and relativism walk in the darkness. But these fears are groundless. What metaphysics
and even physics and its satellite sciences cannot do for us, we can do for ourselves. It is we who bring our aﬀective natures into play when we take up practical stances, and it is we who have to voice our own attitudes and policies,
prohibitions and insistencies, as we conduct our lives together. What other
sources of authority cannot do for us, we can do for ourselves.
This may seem paradoxical. When thinking about religious authority, we saw
that the will of a divine lawgiver seemed an insuﬃcient source for the authority
we need in ethics. We could not conceive how the inescapable nature of obligation and duty, or the intransigent demands of justice could result from the
arbitrary preferences of any being, however supernatural. So how can we now
turn around and say that what even the gods cannot do, we can do for ourselves?
Is it not a royal road to the most corrosive collapse of values to think that ethics
is nothing but the “banner of the questing will” (Murdoch 1967)?
The answer is to reﬂect on what is meant by insisting that it is we ourselves
who value things and insist on things. We need to do so to enable social life to
go on, with its distribution of burdens and beneﬁts, its contracts, promises,
rights, government, and laws. We had better make the adjustments carefully, or
we risk injustice and we court the resentments and instabilities to which injustices give rise. We are in the domain of practice, but that is not to say that the
practice is easy, that our ﬁrst thoughts will be our best, or that the long experience of human aﬀairs going well or badly has nothing to teach us. Our natures
determine many of our likes and dislikes, desires and needs. They also determine what will work in human aﬀairs and what will not. Hence our ethics has
indeed to be founded on as much knowledge of human experience as we can
muster, and that includes the interpretative sciences of history, law, anthropology, and literature as well as those of psychology and surrounding disciplines.
Fortunately, we know a great deal, and nearly all of what we know is open to
view. The qualities of mind and character that help us to succeed in cooperative
endeavors are named with words of admiration and praise; others are talked of
with dislike or contempt. One of the things we all know, and one of the early
things the child in any society learns, is which are which. Ethics is therefore
inescapable, and while nihilism and relativism may be specters that haunt theorists in the study, they disappear in the cold light of day.
References
Berman, D. (1990) A History of Atheism in Britain: From Hobbes to Russell, London: Routledge.
Blackburn, S. (1998) Ruling Passions, Oxford: Oxford University Press.
Damasio, A. R. (1994) Descartes’ Error: Emotion, Reason and the Human Brain, New York: Putnam.
Damasio, A. R., Tranel, D. and Damasio, H. (1991) “Somatic Markers and the Guidance of
Behaviour: Theory and Preliminary Testing,” in H. S. Levin, H. M. Eisenberg and
A. L. Benton (eds) Frontal Lobe Function and Dysfunction, New York: Oxford University
Press, pp. 217–29.
Dawkins, R. (2006) The God Delusion, Boston, MA: Houghton Miﬄin.
Durkheim, E. (2001/1912) The Elementary Forms of Religious Life, trans. Carol Cosman, Oxford:
Oxford University Press.
Gerrans, P. (2008) “Mental Time Travel, Somatic Markers and ‘Myopia for the Future’,” in Sinnott-Armstrong 2008, vol. 3.
Greene, J. D. (2008) “The Secret Joke of Kant’s Soul,” in Sinnott-Armstrong 2008, vol. 3, pp. 35–117.
Hauser, M. (2006) Moral Minds: How Nature Designed a Universal Sense of Right and Wrong,
New York: HarperCollins.
Hume, D. (1978/1739) A Treatise of Human Nature, ed. L. A. Selby-Bigge and P. Nidditch,
Oxford: Oxford University Press.
——(1999/1748) An Enquiry Concerning Human Understanding, ed. Tom L. Beauchamp,
Oxford: Oxford University Press.
——(2006/1751) An Enquiry Concerning the Principles of Morals, ed. T. Beauchamp, Oxford:
Oxford University Press.
Irons, W. (2001) “Religion as a Hard-to-Fake Sign of Commitment,” in Evolution and the
Capacity for Commitment, ed. R. M. Nesse, New York: Russell Sage Foundation, pp. 292–309.
Kant, I. (1996/1803) The Metaphysics of Morals, trans. Mary Gregor, Cambridge: Cambridge University Press.
Knobe, J. and Nichols, S. (2008) Experimental Philosophy, New York: Oxford University Press.
Murdoch, I. (1967) The Sovereignty of Good over Other Concepts, Cambridge: Cambridge University Press.
Nagel, T. (1970) The Possibility of Altruism, Oxford: Clarendon Press.
Sinnott-Armstrong, W. (ed.) (2008) Moral Psychology, vols 1–3, Cambridge, MA: MIT Press.
Smith, M. (1994) The Moral Problem, Oxford: Blackwell.
We commonly take ourselves and each other to be morally responsible agents. We
sometimes blame someone for doing something he should not have done, or praise
someone for exemplary behavior. It is generally thought that we are responsible
for such things only if we freely do them (or things resulting in them). Two fundamental philosophical questions that arise here are: What is the nature of moral
responsibility, and what kind of freedom does it require? Only with answers to
these questions can we settle whether we are in fact morally responsible.
Before turning to these questions, it will sharpen our focus to distinguish the topic
here from some related things about which we may talk using the words “responsibility” or “responsible.” For example, we might say that it is Sue’s responsibility to
feed the cat, or that she is responsible for seeing to it that the cat is fed. In this case,
we would be saying that Sue has a certain obligation or duty, perhaps one that she
acquired by promising to take care of the cat. This type of responsibility is often
called prospective responsibility. If Sue is someone who takes her obligations seriously and generally carries them out, we might say that she is a responsible person.
But now suppose that, although Sue generally does what she ought to do, and
although she has an obligation in this case to feed the cat, she does not in fact do so.
We might then blame her for the cat’s going hungry. In attributing blame, we would
be ﬁnding Sue responsible in the sense at issue in this chapter. If Sue is someone
who can be responsible in this sense, then she is, in this sense, a responsible
agent. This type of responsibility is often called retrospective responsibility.
Retrospective responsibility can be moral or legal. Someone might be morally
responsible for something for which, given the existing laws and legal institutions, he is not legally responsible. The oﬀense committed might not be suﬃciently important to be considered by the law, and no legal responsibility
attaches to ordinary good deeds. One might still be morally praiseworthy for
lending one’s neighbor a hand with some yard work.
It is, then, retrospective moral responsibility that is our topic. What exactly is
it to be responsible for something in this sense?
The nature of moral responsibility
Philosophers have oﬀered several diﬀerent conceptions of moral responsibility.
One core notion has it that responsibility is attributability: you are morally
responsible for something just in case it is attributable to you as a basis for
moral assessment of you (Scanlon 1998: Ch. 6; Watson 2004). The appropriate
judgment might be positive (you have acted heroically), negative (you were
thoughtless), or neutral, depending on the character of what you did. Writers
holding this conception sometimes maintain that our responsibility can extend
beyond our actions to our thoughts, feelings, and even failures to think of things
(Adams 1985; Scanlon 1998: 21–2; A. Smith 2005). On this view, one can be
blameworthy for feeling envious, or having a racist belief, or forgetting a friend’s
birthday, regardless of whether these things result from, or are influenceable by,
one’s actions, for they may nevertheless disclose moral faults.
A somewhat similar conception is that of appraisability: when one is morally
responsible for something, there is a mark – positive, negative, or neutral – on
one’s moral ledger (Zimmerman 1988: Ch. 3). To be culpable for something is
for there to be a debit or blemish on one’s moral record for that thing; to be
laudable is for there to be a credit or luster on one’s record. This view diﬀers
from the former in holding that, of the variety of moral assessments of agents,
only a narrow range are ascriptions of moral responsibility. For example, on this
conception, one might be reprehensible for having a bad desire or character trait,
or admirable for having a good one, without being culpable or laudable – and so
without being responsible – for these things.
Some writers take attributability to include answerability: when one is morally
responsible for something, one is answerable for that thing (Scanlon 1998: 268).
On this view, moral criticism of an individual calls on that person to justify his
behavior and, if it is not justiﬁable, to acknowledge wrongdoing.
When we address such a demand to someone, we are holding that person
responsible. We might reproach him for having acted wrongly, and we might
express anger or insist on an apology. Sometimes we impose sanctions on a
wrongdoer, giving him the cold shoulder or withholding some favor. In the case
of praiseworthy action, we might express gratitude or oﬀer a token of thanks or
a reward to the meritorious agent.
A conception of responsibility as accountability ties it to the appropriateness of
responses of these types (Scanlon 1998: 248–67; Watson 2004; Zimmerman 1988:
Ch. 5). On this view, one’s responsibility for something can permit or require
others to administer sanctions or oﬀer rewards (depending on what one has
done). It might be said that one who is blameworthy has no grounds for complaint about being treated harshly, or that it is fair that he be punished, or that
he deserves to suﬀer.
Often when we take someone to be responsible for something, we have some
type of emotion-laden attitude toward that person. We might be resentful if he