Should you kill one person in order to save five?
Whatever your opinion, your judgment shouldn't be swayed by whether the question is posed in your native language or in a foreign language you have mastered. But that is exactly what an international team of cognitive psychologists led by Albert Costa in Barcelona claim they have discovered: People using a foreign language are far more likely to sacrifice an innocent for the sake of many. And they think they understand why: Our mother tongue is laden with emotion, feeling, association.
When we use our first language, we have a strong intuitive emotional response to the plight of an innocent person condemned to be sacrificed without regard to his needs, feelings, family or situation. Languages learned later in life carry no such emotional load, and so the use of such a language is conducive to dispassionate calculation and purely utilitarian consideration.
It is true that it's easier to keep your distance, emotionally, in a language other than your own. Anyone with experience in this area will know this firsthand. (I have never spoken to my father in his native language. My daughter has never spoken to me in mine.)
This might come as a surprise if you supposed that a language is, as it were, just a code, and that knowing a language is just having the key. Why should it matter which code you use, provided you know the code? This is essentially the standpoint of the study in question: Your thoughts about a moral dilemma should not be "swayed" by which language you are using "so long as you understand the problem."
But languages are not codes. They are forms of life. They are ways of being and ways of feeling. They are complex, spread-out, directly involved habits governing many different aspects of our lives and thought and feeling.
But this means that if it is really true that we come to different conclusions about matters of real importance when we use different languages, and if this stems from differences in our emotional responsiveness when using a language other than our own, then this very fact, all on its own, shows that there are differences in our understanding.
In other words, for conceptual reasons, the study's conclusion undermines itself. You can't hold understanding fixed while you vary emotional meaning, for these two are tied together.
But there is a bigger problem that infects not only this study, but so much of the experimental work in this field.
The reason we speak of "moral dilemmas" is that when it comes to questions such as whether we should kill one to save many we are confronted by competing values.
Is the best course of action the one that "maximizes the greatest good for the greatest number"? Do ends always justify means? Or are there limits on what is permissible? Do people have rights that must be respected? An intrinsic worth that can't simply be added up? And what about the importance of our attachments? What about love? Philosophers have explored and illuminated this space of values, ideas and conflicts for eons.
The upshot is that there are no algorithms, and can be no algorithms, that let us regiment these different competing values and lines of reasoning. This is why we should not speak here of moral responses, as if one's moral opinion were a kind of reflex action or gut feeling. It isn't a moral response unless it is a judgment, and it isn't a judgment unless it is sensitive to reasons, arguments, counterarguments and examples, as well as emotion and feeling. That is, unless it is sensitive to the fact that these are hard choices, real dilemmas, where any judgment call you make is just that, a judgment call about which there can be well-meaning and intelligent disagreement.
Which brings us back to the studies in question. Respondents are asked to check boxes on questionnaires about what "they would do," or what "the right thing to do is." I submit that such box-checks are not expressive of moral reasoning, or moral judgment. They bring us no closer to a psychological or philosophical punchline than "who's there?" does. They don't so much tell us what people think as set the stage for a conversation that might yield that information.
This paper is one of the zillions published in the last few years that aim to "debunk" our understanding of ourselves as agents and thinkers. As the paper begins:
People often believe that moral judgments about "right" and "wrong" are the result of deep, thoughtful principles and should therefore be consistent and unaffected by irrelevant aspects of a moral dilemma. For instance, as long as one understands a moral dilemma, its resolution should not depend on whether it is presented in a native language or in a foreign language. Here we report evidence that people tend to make systematically different judgments when they face a moral dilemma in a foreign language than in their native language.
I deny that the study supports this conclusion. Box-ticking behavior is not evidence of moral judgment. The study does not even show that our principles and judgments are affected by irrelevant aspects of the moral dilemma (e.g., the language it is couched in). For no one in his or her right mind would give as a reason for a choice the fact that one or another language was used.
This study does not support its main conclusion that moral judgment depends on language.
What it does do is provide striking evidence that experimental method, precision and careful statistics are of little value if they operate in a conceptual void. Garbage in, garbage out.