An Examination of Moral Relativism (and Nihilism)

I recently did some reading and listening on the subject of moral relativism (and relativism in general) and would like to record some of what I came across for future reference, and possibly for discussion with the group. I’ve formulated these criticisms in light of my own beliefs, and of what I’ve heard and read that confirms my view: that relativism is not practically workable, that it runs against the very nature of human thought, and that the arguments in favor of it are deeply flawed.

First, let’s define the terms. General relativism (as I’ll call it) makes the claim “all truths are relative.” But this claim omits its own qualifier; to be honest, it would need to read “all truths are relative (except this one),” making an exclusive claim to truth while denying all others. This commits the fallacy of conflicting conditions: it is internally contradictory, like saying “this statement is false.” Relativism as a general concept is by and large considered untenable.

It is possible to maintain moral relativism, but not general relativism; it is possible to say “right and wrong are relative” as an absolute statement. In other words, when someone makes an ethical or moral statement such as “killing the innocent is wrong,” either that statement is only correct within the theoretical framework in which it is uttered (the society’s particular moral code, etc.), or it is actually an expression of the person’s belief/preference/feeling: “I don’t like killing” or “I don’t want you to kill” (closer to moral nihilism–the belief that “right and wrong” do not exist in any sense). The strength of this position is that it does not fall into the trap of total relativism. But what are its problems?

To begin with, let’s look at the reason people believe this theory in the first place. The most common argument for moral relativism is the argument from moral diversity. This argument observes, rightly, that at different times and in different cultures, people have held different moral values, and then concludes that morality must therefore be relative. Some critics observe that morality is actually not different enough to justify the extreme claims of moral relativism (C.S. Lewis refers to the common morality shared among all cultures at all times as the Tao, in his book The Abolition of Man). But regardless of such objections, this argument fails because it makes a category error, exchanging the descriptive for the normative. Observed moral diversity is an “is” claim (descriptive), while morality deals with “ought” claims (normative)–the way things ought to be, not the way they are. What ought to be does not necessarily follow from what is. An analogous argument would be: many people solve a mathematical equation in many different ways, getting different answers–therefore, there must be no correct answer, and math itself must be relative! Clearly, this does not follow. And unless proven otherwise, it remains a possibility that some people have their morality wrong or less correct, while others have it right or more correct–a theory which accords with the phenomenon of moral diversity just as well as relativism does. Furthermore, if we follow this reasoning from descriptive to normative, then we must likewise conclude that science is relative, since scientific knowledge and practice have differed widely across time and among cultures–more widely than morality, if you think about it. Yet we don’t hold this view; we believe that the science we have is the right science, or at least the most fully developed, the closest to correct that we have yet come.
We are Hegelians when it comes to science, but postmoderns when it comes to ethics.

It is also curious that moral language began with, and developed using, absolute terms. If ethics were nothing more than personal preference, wouldn’t we have developed only the language of preference? Of course, this doesn’t prove that moral relativism is false, only that it seems strange, or unlikely.

But to go further, I would argue that moral relativism breaks its own rules in the same way that general relativism does. Moral relativism (like nihilism), in going beyond the descriptive, asserts a truth about ethical statements. Remember, these positions argue that moral statements such as “killing is wrong” are either only true within their theoretical framework (relativism), or actually expressions of personal preference with no “truth” value, societal or otherwise (nihilism). To make this work, they must maintain that meta-ethical statements (which their own claims are) are true, but that ethical statements themselves are relative or not meaningful at all. In other words, we can talk in concrete terms about morality, but moral statements themselves are relative. This raises a couple of questions. First, how do we know that statements of ethics are preferential, but that those of meta-ethics are truthful? How is it that statements about morality actually correspond to reality, whereas statements of morality only correspond to a theoretical framework or personal preference? I’m not sure that there is a coherent reason for drawing the Line of Relativism at ethical statements, but not at meta-ethical statements, or scientific statements, or any other sorts of statements–other than by appealing to some other philosophical position (humanism, or atheism, perhaps?). If you draw the line too far forward (more than just morality is relative), then you tend toward general relativism, and you are in danger of a self-refuting system. If you draw the line too far back (there are absolute truths), then you are no longer really a relativist, and the burden falls on you to show why morality is relative when other things are not. Second, how and why do we distinguish between the ethical and the meta-ethical in the first place? How are statements about morality and statements of morality any different to begin with?
I would argue that this distinction is not justified, and that in fact there is cause to believe the opposite: that statements about morality and statements of morality are of the same kind. To say that morality is relative is to claim that there are no absolutely necessary moral obligations, which is itself a statement of what morality is (or is not). Therefore, to say that morality is relative or does not exist is to make a moral claim that is asserted absolutely–saying, essentially, and in self-defeating contradiction, “all absolute moral claims are false (except this one).” Or perhaps better: “everything about morality is relative (except that it is relative).”

The support for these theories seems to me weak at best, deeply flawed at worst, and the theories themselves may contain irresolvable contradictions. Now, let’s examine their application. Supposing moral relativism and nihilism true, what problems arise in their outworkings?

First, a lot of people contradictorily extract a concrete ethics out of the view that ethics are relative. People say, “right and wrong are relative, therefore we should be tolerant” not realizing that they’ve just made “tolerance” an absolute virtue–making right and wrong not totally relative (and also making the same category error of moving from descriptive to normative). This is one of the key problems I see with humanism in its contemporary form.

(There are similar problems in arguments against religion. Some people argue that no religion can claim to have the whole truth, that it is presumptuous or arrogant to claim a perspective of superiority over others. But isn’t this argument itself a superior claim about religious truth of the same sort–indicating that its speaker does, in fact, have the perspective that supposedly can’t be had or shouldn’t be claimed?)

But if one manages to avoid this trap, there are others that reveal a deep incoherence in the theory. Some moral relativists say, “I have my ideas of what is right and wrong; others do too, and they are welcome to it.” But the problem is that ideas about right and wrong necessarily extend beyond the self. “I think that people ought not to murder, but it’s all relative”–yet to say that “people ought not” implies a standard that applies to more than yourself, indicating externality, and thus a nature concrete, not relative. (Or else, how is it that your personal standard can meaningfully apply to other people?)

One may avoid any sense of corporate, external morality by saying only “I ought not to murder” or “it would be wrong for me to murder” but then this still suggests at least two working pieces: a code, framework, or set of guidelines, and the self upon which these guidelines are imposed. This code can only come from either inside or outside of the self. If guidelines are imposed by the self, then the question is: how do we come up with these “oughts” and how do they meaningfully guide behavior? Are there two selves, or else how are one’s “oughts” different from moment-to-moment impulses, or different from what one actually ends up doing? You’re making up your own “oughts.” You can only say “I ought to do this thing” because you mean that you want to, or you’ve decided you will do it, and then you do it. But then you are making a distinction where none exists only to satisfy the “ought” language–and justifying your decisions and actions using circular reasoning, when there is not even a need to justify those actions in the first place!

But this is contradictory and incoherent, and no one thinks this way. We all think and act in opposition to relativism–in terms of two working pieces, ourselves and something else concrete. For example, a relativist might say that our choices are relative to what consequences we find desirable or undesirable, but how do we decide what is desirable or undesirable? Even if what you find desirable or undesirable is relative for you, you treat it as though it were concrete. You make the decision for what would be desirable on the assumption that it would be desirable no matter what you choose. What decision you make depends on what is desirable, not the other way around. If you should unexpectedly obtain an undesirable result instead of a desirable one, or discover that what you thought would be desirable is really undesirable, you then think that you had made the wrong decision–not that your appraisal of the result needs to change. This all implies permanence and externality of that desirability or undesirability, because it does not change depending on you. In resolving not to make the same mistake again, or in some other way to integrate the experience into your decision-making process, you admit that your judgment is subject to error, and the only way for this to be possible is if what you judged is external to your faculties of judgment. This is contrary to the principles of relativism. 
What you don’t do–and no one ever does–but what relativism demands, is to change the interpretation of the result based on the decision: you don’t say “I’m sure glad I made that decision that got this undesirable result; I’ve learned that this result is actually desirable because it’s what my decision led to!” (“I’m glad I decided to buy that car on a whim from the salesman; now I know that getting cheated and losing money is good!”) Again, no one does this, but this is actually what relativism dictates should happen, because if desirability or undesirability is relative, then we have no justification for interpreting the results of our choices apart from the choices themselves–and for our choices, we have nothing other than the fact that we made them. If we thought it was right at the time, then it was right!

So moral relativism entirely undermines the idea of self-improvement: if whatever we did is what we ought to have done, there is no way to know or say that we did something wrong, or that we “ought” to be “better.” Any selection of a “better” path can be nothing other than arbitrary or capricious. This is even more problematic on a societal scale: how are we to improve society without pointing to certain ethical codes as better or worse than others? For instance, on what basis did our society depart from its endorsement of slavery other than to say that we had had it wrong the whole time, rather than that we had simply, arbitrarily changed our feelings about it? Maybe society exchanges one moral framework for another–but on what basis? Doesn’t it have to appeal to a higher, more permanent framework to justify the exchange? On the other hand, if relativism is true, the only real appeal to a certain ethics is through force. A leader can’t meaningfully say “murder is wrong” to his citizens except as a lie, justifying his convictions with an ethos beyond his own; he can only truly say “you’d better not murder or I’ll make sure something bad happens to you” (assuming that “something bad” is something actually bad, and not just relative…)

If true ideas about right and wrong don’t exist, then what is the meaning of having ideas about right and wrong in the first place? It’s like having very strong opinions about who Santa Claus is, what he is like, but without believing that he actually exists. You might believe that Santa Claus has pointy ears–you may believe this very strongly–but then, what is the point of this, if Santa Claus isn’t real? Why do you bother with this belief in the first place?

If you are a true relativist, I would argue, you are actually a closet nihilist, and you must give up your personal beliefs about right and wrong altogether in order to be consistent. But does this work? This leads me to my next point: the difference between belief in Santa Claus and belief in morality is, of course, that one is an integral part of the human experience and the other is not. So I’m going to apply this analogy even where it breaks down. If we discover that arguing about whether Santa has pointy ears is meaningless because he doesn’t actually exist, that’s fine–we can stop worrying about his ears altogether–it doesn’t affect our lives. But if we decide that morality doesn’t exist, it still affects our lives; there is no escape from it. Unlike choosing to disbelieve in Santa, deciding whether morality exists is more like deciding whether to breathe air–it has consequences either way. Keep breathing air: you live. Stop breathing air: you die. Everyone makes choices. What one believes about the nature of morality influences the decisions one makes. Therefore, beliefs about the nature of morality influence everyone. Whichever position you choose, it matters. There is no way for it not to matter.

The practical result of moral nihilism–dismissing the idea of the “ought” altogether–is to live without any framework for making decisions, conforming to whatever whim strikes us, living reflexively. But this is not possible. If we decide to live reflexively in order to accord with our nihilistic beliefs, then we have constructed a moral framework (we “ought” to live reflexively), and we are no longer nihilists.

Even if, theoretically, we could avoid this, moral nihilism inherits relativism’s problem of self-justification, but with a twist: if there is no right or wrong, there is no justification for analyzing the result of a choice–indeed, under nihilism one cannot evaluate the result of a choice at all, except by the bare fact that the choice was made.

And most importantly, moral nihilism begins to undermine reason itself. Reasoning is a series of “oughts”–rules of logic and inference. This “ought” to follow from that. With no “oughts,” there is no reason. If we object, saying that moral “oughts” don’t exist but “oughts” of reason and logic do, then we fall into the same error as the relativist: drawing a Line of Nihilism arbitrarily at the questionable distinction between ethics and meta-ethics. What is the difference between “oughts” of morality and “oughts” of logic? And we contradict ourselves in saying “there are no moral truths (except this one).” So I don’t think it’s a slippery slope to say that moral relativism and nihilism are not only self-contradictory and contrary to the human experience, but threaten to (and by necessity must) undermine much more than just morality.

Thus, the theories (moral relativism and moral nihilism) seem highly unlikely, especially considering that there are other competing moral frameworks more congruous with human experience, thought, and life (e.g., Aristotle’s telos, divine purpose/Christian virtue, human rights theory). Nonetheless, relativism and nihilism remain attractive positions. They seem to arise from the sort of naturalistic atheism which, although subject to much criticism itself, is by and large the norm in our cultural place and time. They appeal to people who disbelieve in God, the supernatural, and the metaphysical, and who hold scientistic beliefs that dismiss the idea of anything “innate” in humanity other than biological processes (and to people who want to do whatever they want to do without having to think deeply and seriously, and who want to avoid the pain of moral correction). But moral relativism and nihilism have serious problems: their philosophical justification is questionable at best, and they are incongruous with how we as humans think and live.



Four of us total today. This week we took a more informal approach to conversation, beginning by sharing stories about arguments that had gone sour on Facebook–often ending with one of the parties admitting that he or she had not read the article or viewed the video in the original post. We lamented, as we have done before, that people tend to take honest argument as aggressive or condescending (or both). Granted, sometimes the presenter of an argument is aggressive or condescending (or both), but sometimes honest argument is offered in good faith and simply not received in its intended spirit. These arguments can ruin friendships or sour family relationships.

It is difficult to remain open to arguments when we hold strong opinions and when whoever presents a counterargument lacks authority. Sometimes, even when we lose an argument, we are tempted to hold our ground anyway. There is much more at work in argument than argument.

I’m not sure how we transitioned to the next subject, but we began talking about drugs: how prevalent they are among teenagers, and how easy it would be to do them. We have all had experiences, or seen movies, or been raised to avoid drugs (usually a combination of these). One’s ability to resist drugs might come down to society and expectations. If someone we respect, in a broad sense, is doing drugs and offers them to us, we might be more tempted than if our friends and family were drug-free and the person offering were a stranger. I mentioned Requiem for a Dream and Trainspotting–two films about the ill effects of drug use and addiction, which present arguments against drug use as good as any.

Someone brought up a particular method for understanding political preferences by way of critique–a spectrum from liberalism to conservatism with communism on one end and Nazism on the other. The Nazis were National Socialists, so wouldn’t they be closer to the communism/socialism end of the spectrum? We agreed that the X-Y axis method of mapping political preferences was superior to the one-dimensional spectrum.

Similarly to the simplification of politics, history texts and presentations are often watered down. Specifically, we mentioned Martin Luther King, Jr. assemblies, which change the core message (“I have a dream” becomes “follow your dreams,” or “I love myself!”).

Somehow or other, we got to talking about religions, mentioning that there are several that compete for the title of “Christian”–e.g., Catholics, Mormons, Jehovah’s Witnesses–but that, arguably, are not properly called Christian. The Mormons, for example, have an entire additional sacred text (the Book of Mormon), the most obvious point of difference, in addition to distinctive doctrines about the afterlife (do they really believe that you become a god of your own planet and create “spirit babies”?).

We touched on many topics today but didn’t really get deep into anything. Potential future topics: the underlying biases in public school education, and comparative religion!

Is Education Social Engineering?

The three usual suspects. Today our question was: is education social engineering? We actually answered this question fairly quickly. First, we defined social engineering as any attempt by a person or group in power to manipulate other people and shape society, and then concluded that public education fits the bill. We spoke less argumentatively this session and more sociologically and speculatively.

We observed that there are many implicit (and sometimes explicit) values taught at school, and “caught” from the overall student culture. Cliques and groups are a reality, and so are their power structures–there are leaders and followers within these groups.

Our conversation turned toward the major problems with school in general, beyond its being social engineering. One of the biggest problems we identified is the lack of freedom. Compulsion automatically decreases the student’s interest, investment, and sense of value. This is contrary to one of the goals of school: to impart a cultural ideal or norm to students, namely the academic tradition. Now that I think of it, this probably comes from one of school’s original purposes, which was to develop and encourage a foundation for active and responsible citizenship. We mentioned Thoreau’s “On the Duty of Civil Disobedience,” in which he states that compulsion is contrary to the foundational principles of the United States and that it discourages personal investment. His examples concern the use of money to purchase goods–the purchase discourages a personal appreciation of the good–but I think by extension we could include education as well.

This led to a broader discussion about government welfare. Is it the government’s place to take the citizen’s money and determine where it goes, even if it is for the general welfare? Ideally, this should be up to the individual citizen. However, that requires a high degree of trust that individuals will spend money responsibly–an ideal that is not often met in reality. People are selfish with money. Still, this does not justify the government’s taking the money and deciding where it goes. Additionally, people on welfare can seem unappreciative of what they are given, because they don’t have to work for it. It seemed more apropos to us that the government or other charitable agencies provide services that enable people to get back on their own feet–drug rehabilitation programs, job training, etc.–rather than handouts such as food stamps, which people have come to rely on almost as “givens.”

Returning to our discussion of school, we observed that many teaching strategies are also manipulation tactics. We talked about how students move from one teacher to the next at least once every year (in most school settings), which prevents them from forming lasting attachments to adult teachers/mentors, and seems to teach them that those relationships are not as important as those with their peers. Perhaps this is why students seem apathetic or even resentful (and sometimes downright aggressive) toward teachers once they reach high school. And because the teacher-student relationship is often reduced to classwork and the resulting grades, a heavy emphasis falls on work and achievement. Students take grades and feedback personally. This sort of heavy personal and emotional emphasis on grades stigmatizes failure. But the educational system progresses people to higher levels of new and more difficult skills–and how often does anyone do something right the first time they do it? I argued that failure is normal and inevitable, given the introduction of new and more difficult skills, and that it should therefore be expected and embraced. No amount of pep-talking or proselytizing in class about how improvement is really what matters or how grades are only part of the story will change the emphasis that a grade has when that is what all work comes down to: a printed report card, a GPA, a transcript, a college application. Of those who do not develop a highly emotional attachment to their grades, many go the other direction and become apathetic. We wondered why apathetic students should be compelled to come to school, when they make it difficult for the students who want to be at school and for teachers who want to contribute to the learning of their students. 
We had all experienced classes with one or a handful of disillusioned students disrupting the educational process, hindering the learning of all students and creating otherwise needless difficulty for the instructor. Is there a better way?

We questioned the arbitrary selection of skills emphasized by schools. Standards seem only to compound this problem. We noted that elementary standards demand skills that children are not cognitively ready to perform. At the high school level, many required classes seem out of step with the career paths that students end up taking, especially technical careers. High school mathematics seems geared toward engineering and scientific professions and does not emphasize practical skills such as might apply to taxes or insurance. English focuses primarily on literary study rather than career writing, and seems ill-suited for those not going into highly academic professions. Furthermore, the way these studies are taught seems to disconnect them from real life. History, for example, always bored me in school, because I never understood the connections to my own life or to what was happening around me. Its presentation was so dry. But now I love learning about history, especially the philosophical underpinnings of different historical movements and time periods, and understanding how our current thought is influenced by what people long ago thought, wrote, and did. Perhaps in its attempt to be “objective,” school also ends up cutting out the aspects of subjects that generate personal interest.

Again, we seemed to see that systematic or institutional education kills a true desire to learn. The emphasis on arbitrary skills, measured by grades, creates a disconnect for the student between what he is learning and how it guides his destiny. He is not assessed on whether he learns the material; he is assessed on how he performs on assessments of the material, and he is not trusted to judge for himself whether he has learned something or not. We seem to take it for granted that this is how education should work, but it seems more obvious that testing demonstrates only how a student performs on a test, and often little else. It is only through a leap of faith that we can say that test results quantify actual student knowledge, skills, or learning. This discussion seems to be missing from the charter schools debate. People say that charter schools perform poorly. But there are two important questions. First, who are the students usually sent to charter schools? The ones performing poorly in regular schools? (If so, perhaps that is why charter schools perform more poorly. What is the cause/effect relationship between charter schools and their performance?) Second, how is the performance of charter schools measured? By tests? (Are the students going to charter schools students who don’t test well? Again, what is the cause/effect?)

We discussed radical alternative methods of education, including the School Without Walls. How might it look if students were not, in fact, required to attend school, but rather attended of their own free will and directed their own course of study? Teachers would act as facilitators: rather than planning lessons and implementing crowd-management and manipulation tactics, they would provide knowledge, insight, and feedback as requested. These sorts of radical changes are slow to materialize because of the massive size of education as an institution. While all of its structure and legislation is well-intentioned, it also hinders changes from taking place–changes that might be radical, but that might also drastically improve the state and nature of education.

Our last comments concerned the changing world of post-high school expectations. Many parents still emphasize the importance of college, but the career world may be moving away from it. More and more people are getting a college education, making a college degree a more common commodity, and therefore less of a distinguishing factor on an application or resume. Also, many fields, especially technical ones, value experience and references just as much as, if not more than, coursework. I noted that in all the many times I have submitted something for publication, I have never been asked about my education, but always about any previous publications. So even in non-technical fields, a track record of experience and previous success may be worth more than any amount of education.

For further reading:

Fortress of Tedium (in part about the School Without Walls)

Normal Accident Theory

The Death Penalty: Is There a Dispensable Mob Boss?

A small group, only three of us. We had no planned agenda. There was a question about Aristotle’s four causes, but I wanted to put off trying to talk about them until I had done some more research. So instead, we decided to discuss some ethical issue. On a whim, I suggested the death penalty. More specifically, I asked the question: is it ethical for a society to put to death one of its members, and if so, when?

To figure this out, we began by examining some of the common justifications for the death penalty. One is cost: it is sometimes claimed that the death penalty is cheaper than keeping someone alive in prison for life (though in practice, the appeals process often makes capital cases more expensive). In any case, the cheapest option is not always the “right” option; lower cost does not justify something ethically.

We moved on to the use of the death penalty as a punishment. Is the death penalty a proper punishment for murder, and is a society justified in carrying out such punishment? (I separate these two questions because even if the death penalty is an appropriate punishment for murder, that does not necessarily justify a society in enacting it.) “Punishment” seemed to admit different interpretations. One potential definition is that “punishment” is something that makes someone feel bad for their actions. Obviously, since the death penalty takes away one’s ability to feel entirely, it doesn’t really function to inspire guilt or bad feelings in the person who committed the act (other than, perhaps, the anguish of awaiting the sentence in prison–more like a torture technique).

Alternatively, it might be justified by its function as a deterrent: the threat of the impending punishment might be such that it causes an agent to alter his or her course of action. This had some issues. The strength of this particular threat, and thus its effectiveness as a deterrent, appears to come down to one’s personal beliefs about death. The more afraid one is of death, the more effective the threat of death should be as a deterrent. If you are afraid of going to hell, then you will not want to be sent there. However, if you believe that you will go to heaven, you might be less likely to fear death. Similarly, if you are an atheist and you view death as merely the cessation of biological activity, then you would have a lesser fear of death than if you were afraid you might go to hell. If you are a radical Islamist and you view the death penalty as a death in battle, or for a cause, you might even welcome it. Thus, its effectiveness as a deterrent is probably limited by a population’s beliefs about what happens after death.

The jury is still out on whether or not societies with the death penalty are statistically less likely to have murders. But even if the death penalty were shown to be an effective deterrent, would that necessarily justify it? More on that later…

Another potential function is as a settling of a score, a payment of a debt, a restoration of the balance of justice. But this also seemed inadequate. If I am a thief, and I steal money from someone, I can repay the debt. But if I am a murderer and I take someone’s life, I cannot give that life back. Furthermore, the taking of my life by capital punishment does not bring the scales back into balance, it simply adds to the body count. Capital punishment doesn’t seem to settle a score in the same way as other punishments or acts of retribution. (But even these have their problems. If I steal Steven’s car, I can return his car or pay him a sum of money equivalent to it–but if I steal Steven’s car on a day when Steven has a very important job interview, making him unable to go, I have not only robbed him of his car, but I have also robbed him of the opportunity to get a very important job. And there is no way to repay the debt that I owe him on that score, even if I return the car, or two cars, or three.)

Some would argue instead that it provides consolation for the victims’ families, even if it is not a proper restoration of the balance of justice. It might, but that isn’t necessarily an ethical justification: just because something provides comfort or consolation doesn’t mean it’s right (and the same objection applies even if it did function as an effective restoration of the scales of justice, or as a deterrent). A common justification is “an eye for an eye”–you take a life, you lose your own. But we don’t apply that principle to other crimes. We kill killers, but we don’t rob robbers, and we don’t rape rapists. How is it that we can say killing is wrong for a citizen but acceptable for the government, while rape, for instance, is wrong for both the citizen and the government? Where does the double standard start, and where does it end?

Human beings have a strong sense of justice, and a desire to see it done, and punishment can satisfy that desire. We watch TV programs like Game of Thrones, get wrapped up in hating certain despicable characters, and then feel catharsis when they are punished for their misdeeds. We all have a desire to see justice done, almost to the point of bloodlust. But there is a problem. Steven used the example of a villain in Bad Boys II–an otherwise “disposable mob boss” who elicits no sympathy, but who has a daughter. When the villain is defeated, or killed (I don’t remember which; it’s been a long time since I’ve watched Bad Boys II), there is some lingering unease–what happens to the daughter? This raises the question of whether in real life there are any purely disposable people, like the despicable characters in Game of Thrones, or whether people always have mitigating factors that, if known, would make their deaths rest less easily with us, like the not-quite-disposable mob boss from Bad Boys II. Does everyone have points of sympathy? If so, is it for us to say that they deserve death? We know that most murderers and similarly broken people have tortured pasts and histories, things that can make us sympathetic to their plight. At what point do we draw the line, and for what crimes, saying: we feel sorry for you, but not sorry enough that we won’t kill you–?

But this raises the broader question of to what extent anyone is responsible for their actions in general. We said that murderers and otherwise messed-up people almost always have some history of trauma, and this could potentially mitigate their responsibility for their actions in some way. But if that’s true, couldn’t it be argued for all of us? Aren’t we all, in not insignificant ways, the products of our upbringings and our backgrounds? And at what point do we stop saying “well, that’s just the way I was brought up” and start saying “yes, that was my fault”? Here it appears that we must either feel sorry for everyone or feel sorry for no one–unless we can identify a point at which someone is or is not responsible for certain actions.

This poses another problem, though: if a person’s action is often outside of his control, even more so is its result. For example, we discussed how murder and attempted murder are judged differently. The problem is that the action is the same in both cases, and the only reason it turned out differently was likely some factor outside the agent’s control: his inexperience as a murderer, his clumsiness, accidents that conspired to keep his victim alive. Yet we see fit to judge people on the results of their actions, which are as much the products of accident as (or even more than) of their intentions. (For more on this issue, see Thomas Nagel’s essay “Moral Luck.”) Why do we judge someone differently for something that wasn’t his fault?

But the prospect of no longer saying that people are responsible for their actions does not seem like a viable option. (We all know people who shirk responsibility.) So perhaps we should judge someone by his intent, not by his actions. However, this is incredibly problematic for any serious system of justice in society. How do we know someone’s interior thoughts and intentions? “I didn’t mean to speed…”

We discussed negligence: when is not doing something doing something wrong? Here again, there seems to be a double standard. The law seems to indicate that some acts of neglect are reprehensible, while others are not. Neglecting to feed your child is illegal. Neglecting to intervene in a fight (which may lead to a murder) is not.

Somewhere in here, we also discussed the off-chance that a verdict is wrong. Is it excusable for a government to take the life of a citizen if there is a chance, no matter how slight, that its verdict was in error?

This was as far as we got. There is more to say on this issue, but so far we could not find an acceptable justification for the killing of another human being by a government unless it was preceded by a divine command–God ordering that the person should be killed. So many ethical issues seem to run afoul of the “moral luck” problem. And this I believe is where Aristotle’s ethics project led him, although instead of “moral luck” he might call it fate, or tragedy: no matter how virtuous a person’s character, intent, and actions, goodness may still be thwarted by factors outside of the control of human beings.

Next week: the four causes, and is institutional education social engineering?

Addicted to Love (and Apologetics Continued)

We picked up essentially where we left off, launching right into the arguments for the existence of God. Steven had done some outside research, looking into Aquinas’ arguments. The one he found most compelling was the first-cause argument: everything has a cause, so there must have been a first cause that is itself uncaused. This led into a discussion of Aristotle’s four causes, since many of these arguments rest on Aristotelian metaphysics. Aristotle’s efficient cause is what we normally think of as “cause” in our contemporary understanding, but there are three others that are also used to frame arguments for God. Our understanding of these arguments is necessarily impoverished because we don’t subscribe to this metaphysics. I will post some resources on these later, since our discussion mostly consisted of me trying (badly) to remember them without really understanding them sufficiently.

We then discussed the problem of evil: if God is all-good and all-powerful, why is there suffering and evil? This is a common objection to the existence of God (at least a good God), but my assertion was that this actually doesn’t function as an argument against the existence of God, since it necessarily assumes that God already exists in order to be meaningful. It is essentially an “interior” issue within the belief system, not an objection from outside of it. In any case, it is a problem, and there are a couple of responses. A common response is the free-will argument: God is not ultimately responsible for evil–human beings are. It was good for God to give humans free will, but humans misuse that free will, and therefore evil exists. There are a couple of problems with this argument. First, it fails to explain natural evil and suffering–natural disasters, disease, etc. Second, a potential loophole is the question: wasn’t it up to God to give people free will in the first place? And in doing so, didn’t he know that people would commit evil, and isn’t he therefore ultimately still responsible for evil? There are some responses to this. Greg Boyd argues a position called “open theism,” in which God is said, in a way, not to know. This view sits uneasily with much of what we see in scripture and with much theology about God’s omniscience and foreknowledge, but it is one potential response. A second response to the problem of evil is to identify a couple of other problems with the argument itself, namely that it assumes: 1) the allowance of suffering is not good, and 2) we as humans have a viewpoint comprehensive and objective enough to know that the universe would have been better had it been created without the inclusion of, or potentiality for, evil. Both of these assumptions are contestable.
If there is a God, we are not that God; therefore we have no grounds on which to judge the act of creation, nor grounds on which to say that the inclusion of suffering in its design is ultimately an evil thing. The Bible itself (see the Book of Job) seems to say as much. This response is disconcerting–it does not easily resolve the cognitive and emotional dissonance we feel over this issue. But I find it compelling, and sufficient as a rebuttal to the argument.

We left off here, and revisited the topic of artificial intelligence. The question this time was whether or not artificial intelligence could be said to be conscious, and whether or not it could be said to act deliberately. Aristotle separates human beings from the animals using this as one of his criteria: that human beings are deliberate agents. As such, it seemed to us that artificial intelligence does not meet this criterion. Whatever “deliberation” occurs is predetermined by its programming, and so artificial intelligence is simply following its programming to a predetermined end. The goal of artificial intelligence is ultimately the execution of its code. Human beings have a different sort of end or goal (telos in Aristotelian language), and seem to have a different level of awareness of the sensory data taken in and analyzed. Artificial intelligence only responds to stimuli; human beings deliberate.

The final topic of interest was that of romantic love, and whether an addiction to romantic love might qualify as a type of disorder, such as might be categorized in the DSM. First we had to delineate what constitutes romantic love versus other types of love (such as long-term relational love, the love of beauty, or a love of ice cream, for example), and we seemed to settle on defining “romantic love” by the chemical responses in the brain that create the emotional “high” someone gets at the onset of a new romantic relationship. Then we determined that such a chemical response is not properly called an addiction in itself–the chemicals alone don’t necessitate addiction. Addiction is a response to the chemicals, not the chemicals themselves. We all seemed to agree that such an addiction is at least a possibility, though not a reality for most people who experience romantic love.

Then we raised the larger question: what merits inclusion in the DSM? We wondered if the inclusion of something in the DSM as a “disorder” isn’t somewhat arbitrary. What is the gold standard, the “average” person, against which all of these disorders are weighed? We pointed to the fact that many disorders have the potential to be constructive (such as ADD, properly managed), and that some have been removed entirely (homosexuality used to be classified in the DSM). How is it that we know a certain facet of human behavior is a dis-order? We also pointed to the problematic nature of diagnosis itself. Someone might exhibit the symptoms of a disorder yet not have that disorder. It seems that in some cases, it’s the assumption that the person has the disorder that leads to the later diagnosis–confirmation bias. Sara told the story of a man in prison who exhibited the signs of a psychopath and could not escape the diagnosis.

This was a particularly rich discussion, and I’m sure I’m missing some of the details. We left off with some suggestions of potential topics for next time: substance dualism, and the everlasting problem of morality and ethics, which is almost as big a problem as the fact that there were still no snacks.