MacIntyre on Protest

Protest is now almost entirely that negative phenomenon which characteristically occurs as a reaction to the alleged invasion of someone’s rights in the name of someone else’s utility. The self-assertive shrillness of protest arises because the facts of incommensurability ensure that protestors can never win an argument; the indignant self-righteousness of protest arises because the facts of incommensurability ensure equally that the protestors can never lose an argument either. Hence the utterance of protest is characteristically addressed to those who already share the protestors’ premises. The effects of incommensurability ensure that protestors rarely have anyone else to talk to but themselves. This is not to say that protest cannot be effective; it is to say that it cannot be rationally effective and that its dominant modes of expression give evidence of a certain perhaps unconscious awareness of this.

From After Virtue, by Alasdair MacIntyre

https://epistemh.pbworks.com/f/4.+Macintyre.pdf


Action and Intent

Why should one be generous?

That was the question we started with today. One of us had run across an argument that went something like this: one should be generous because it gives you a self-satisfaction that nothing else can. But then the question is: isn’t that really selfishness rather than generosity? And what happens after an act of generosity that is ineffective or poorly received? (“They totally didn’t appreciate what I did for them–my generous effort was wasted!”)

So, if the reason behind being generous can’t be only because it makes you feel good (although, happily, that is often a side-effect), why be generous? Why is generosity good?

As is the case with all of the ethical questions we discuss, we reached an impasse, but having discussed ethics so much, we reached it rather quickly. Either generosity is inherently good in itself (and there is no way to prove that), or its goodness depends on something else–a chain of reasoning, or a dictum from God.

We discussed the problems with morality from “God says so.” While this is actually a way to solve the ethics problem, people bristle at it. It’s a conversation-stopper, and it seems overly simple. Shouldn’t there be a chain of apparent reasoning in addition, if in fact morality comes from God?

There’s much to say on this issue, but our conversation instead turned to the analysis of moral actions themselves: specifically centering around the question of action vs. intent. Can someone commit an evil action with a good intent?

For the sake of argument, we all agreed that Hitler’s acts of genocide were evil. But his intent, one could say, was to promote human flourishing. At first, then, it appears possible to have a good intention but to go about it in an evil way that actually prevents it from being achieved.

But, I argued, Hitler’s vision of human flourishing is quite different from ours–one that involves racial purity. So, it’s not enough to say that Hitler’s goal was “human flourishing” and dismiss it at that. We have a different conception of human flourishing that does not involve racial purity, therefore we would never have need of genocide in going about achieving it. The goal, at least in part, dictates the action.

Thus, it may be that since the action is dependent in no small way on the intent, the separation of the two is artificial. The intent is embedded in the action. Again: is it possible to commit an evil act with a good intention, or does the evil present in the act indicate evil intent?

We explored this a little bit with the example of Robin Hood. Robin Hood’s crime of robbery might be evil, committed for the greater good of redistributing wealth and caring for the poor. But, would it not be possible to say that Robin Hood’s act of robbery, while a crime under law, was actually the breaking of an unjust law (which stipulated that the wealthy should maintain their wealth while the poor should suffer), and therefore actually a just/good action?

What if Robin Hood killed someone during his act of robbery? Would that still be achieving a good end through evil means? Well, partly this depends on the circumstances of the killing, but we concluded that the act of killing is separate from the act of robbery, and therefore its intent could (should?) be considered separately. If Robin Hood kills someone out of fear of being killed himself–a preemptive, defensive killing–then he is doing so with the intent of preserving his mission, supported by an unjustifiable prediction of the future: how does he know that no one will continue his mission if he is killed? However, if his intention is different–to commit a virtuous robbery without harming anyone unless necessary (whatever that might mean)–the killing will not happen. The intent dictates the action.

Even in the case of the US capture of Osama Bin Laden, the civilian casualties indicate something about the goal. The vision included the potential for civilian casualties. If the intent were for a more righteous capture of Osama Bin Laden, the actions would not have been the same–we might have waited for a more opportune time, or gone about it in a less destructive (although perhaps potentially less effective) manner.

We concluded that actions, being highly dependent upon intent, actually include intent as an embedded aspect. The two are inseparable.

Our conversation drifted to various topics: transgenderism, feminism, gender, the idea of rights in general. An interesting question was, if gender is only a societal construct, can there be any justification for gender reassignment surgery on the basis of someone “knowing” they are a certain way?

The arguments for certain of these positions seemed pretty poor, and it appears that many people choose their positions based on their feelings rather than any sort of rational or critical thought process. I recalled a passage from Alasdair MacIntyre about how the shrill “protest” that we see in marches and rallies comes from the subconscious knowledge of the protesters that they can’t actually win their case in rational argument, so they must put on a display of emotion as a means of persuasion instead.
After having discussed many hot-button issues calmly and in the space of about ten minutes, we lamented how easily people are swept up emotionally and hang on to certain positions even though they are clearly not tenable. An acquaintance of mine told me recently that he decides his beliefs based on interpreting the facts of his life objectively–but what manner of interpretation is objective? (Doesn’t objectivity rule out any sort of interpretation by definition?) And what constitutes “facts,” especially when it comes to one’s own life or experience? After a certain point, knowing about philosophy is, in a way, like knowing that the emperor has no clothes. We get in arguments with people forgetting that they still think the emperor has clothes on. It’s no wonder these arguments are frustrating and go nowhere–many people are not aware of the philosophical assumptions they hold, or if they are, they take them as “givens.”

Preservation of the Unfit, Survivalism, and Conforming to Nonconformity

It’s been a while since Philosophy Club convened, and even longer since I’ve kept track of the minutes.

We began throwing out some different topics: Aristotle’s idea of the highest good (eudaimonia) was the first, arrived at by way of moral relativism and my devastating critique of it. (There is still some discontent regarding that post, but the relativists are in retreat for now.)

Serious discussion began around this hypothetical question: If human beings were created/grown for the purpose of transplanting the consciousness of the elderly (in order to allow people to live longer), would that be ethical? The human beings grown would be fully-formed human beings, with consciousnesses of their own.

It seemed to us at first that it would be unethical, since the transplantation of consciousness would erase the “original” consciousness of the body. Even if the bodies are grown for this purpose, they are still human beings. We noted that a human being is more than the sum of his parts–growing an ear on a lab rat and then removing it is of little consequence, but growing an entire human being and then erasing his consciousness (in effect, killing him, although not bodily) is a different matter.

Then we discussed the euthanasia and assisted suicide question. At what point is someone’s suffering “enough” to merit his decision to kill himself, or the doctors’ decision to kill him if he is not conscious? Is it subjective? Some people find reason to complain even though they live in luxury. “My internet connection is slow!” Suffering? Whereas, some people whose situations and circumstances would seem to indicate great suffering can find contentment even so. A person can feel suicidal, like life is not worth living, at the breakup of a relationship–but someone who has lost loved ones, or worse, can still think that life is a blessing.

In the case of euthanasia, we observed that the predictive powers of doctors are limited. No matter how low the chances of recovery, there is always a chance. Does this chance of recovery not justify keeping the person alive? Euthanasia is, paradoxically, an act of both arrogance and hopelessness.

The next question was asked in the context of Darwinism and survival of the fittest. In providing cesarean sections for pregnant women who are having difficulty in labor, and providing life support to infants who need it, are we preserving people unfit for survival? The answer seems to be yes. If “nature” were left to its own course, many people now living would have died during childbirth (mothers and babies), and others would not have been born (the mothers having died previously in childbirth). What are the potential consequences of this? Are we breeding a race of people radically unfit for survival in this universe? Will we eventually turn into a race of weaklings who will need more and more forms of life support as we preserve those who are unable to survive on their own, and always have been since birth?

Someone postulated that ethics comes from man’s instinct to survive: survival is the ultimate goal of all ethics, and right and wrong are decided by what will best preserve the species. I objected to this with a few different ethical scenarios.

The first is, of course, eugenics and genocide. If we want the race to survive, shouldn’t we exterminate people who are less fit, who would weaken the human race and make it less likely or able to survive–or at least work to breed people who are stronger and more fit? Many people would say that both of these projects are ethically wrong. That it is even a question seems to indicate that ethics is not merely about survival.

Second, I posed the example of a woman who is infertile. Somehow, we know that she is infertile (perhaps she has had her ovaries removed). At this point, she is simply consuming resources. Should she not be exterminated, to preserve the resources for people who are actually aiding the survival of the race (by reproduction)? There were a couple of objections to this. First of all, people who are infertile can still make contributions to society. But then we run into the problem of deciding what constitutes a contribution. What, beyond existence, constitutes a contribution?

Third, I asked, what about acts of sacrifice? If survival is the ultimate priority, a man standing by a river who sees a drowning child should not jump in to save the child–it would not even be a question. Refraining from the attempt guarantees his own survival, whereas jumping in would significantly lower his chance of survival, even if it might raise the child’s chances a little. Further, a child stupid enough to fall in the river should be allowed to perish and not reproduce, since clearly this child is less fit than children who do not fall in rivers. This clearly runs against the vast majority of morality throughout human history, which values acts of sacrifice and courage. Survivalism values cowardice.

None of this says that survivalism isn’t correct as an ethical system, but it seems to dismantle the idea that as human beings we got ethics from our instinct to survive.

We discussed conformity and nonconformity. What exactly is nonconformity; is it even possible? If you are a nonconformist, aren’t you really conforming to a certain idea about what nonconformity is? At the very least, nonconformity indicates that there is, at the outset, something to conform to, and there may be a wide range of possibilities, but there is still a limit to them. There is a framework to conform to: at the very least it excludes whatever falls under “conformity.” I mentioned Thoreau’s apparent difficulty with holding nonconformity as an ultimate ideal–after two years at Walden Pond, he felt frustrated at having conformed to the path he had laid out for himself. But what did he think was going to happen?! With every choice we make, we only choose what it is that we are going to conform to, or how. There is really no such thing as pure nonconformity. This might also be why Nietzsche had a hard time developing a “new ethics”–he was vehemently anti-Christian, and wrote that the Übermensch would be the man free from all morality. But at the very least, wouldn’t this mean acting contrary to Christian values, and wouldn’t this be limiting?

I posited that all “nonconformity” is actually conformity to something outside of what’s been called “conformity” in the situation. For example, an activist might refuse to conform to an unjust law–but only because he is conforming to a higher ethical framework, one that declares that the law is unjust.

Lastly, we discussed argument itself, and how too often people engage in argument to win, not to learn (ourselves included). I mentioned Rogerian argument as an ideal alternative, in which one argues by convincing one’s opponent that it is in their benefit to adopt one’s position–which requires beginning by finding common ground, a place of solidarity. https://writing.colostate.edu/guides/teaching/co300man/com5e1.cfm

An Examination of Moral Relativism (and Nihilism)

I recently did some reading and listening on the subject of moral relativism (and relativism in general) and would like to record some of what I came across for future reference, possibly for discussion with the group. I’ve formulated these criticisms from my own beliefs and in light of what I’ve heard and read that confirms my view of the theory: that relativism is not practically workable, that it runs against the very nature of human thought, and that the arguments in favor of it are deeply flawed.

First, let’s define the terms. General relativism (or so I’ll call it) makes the claim “all truths are relative,” but this omits a necessary qualifier: to be honest, it needs to read “all truths are relative (except this one),” making an exclusive claim to truth while denying all others. This commits the fallacy of conflicting conditions–it is internally contradictory, like saying “this statement is false.” Relativism as a general concept is by and large considered untenable.

It is possible, however, to maintain moral relativism without maintaining general relativism; one can say “right and wrong are relative” as an absolute statement without self-contradiction. In other words, when someone makes an ethical or moral statement such as “killing the innocent is wrong,” either that statement is correct only within the theoretical framework in which it is uttered (the society’s particular moral code, etc.), or it is actually an expression of the person’s belief/preference/feeling: “I don’t like killing” or “I don’t want you to kill” (closer to moral nihilism–the belief that “right and wrong” do not exist in any sense). The strength of this position is that it does not fall into the trap of total relativism. But what are its problems?

To begin with, let’s look at why people believe this theory in the first place. The most common argument for moral relativism is the argument from moral diversity. This argument observes, rightly, that at different times and in different cultures people have held different moral values, and concludes that morality must therefore be relative. Some critics observe that morality is actually not different enough across cultures to justify the extreme claims of moral relativism (C.S. Lewis calls the common morality shared among all cultures at all times the Tao, in his book The Abolition of Man). But regardless of such objections, this argument fails because it makes a category error, exchanging the descriptive for the normative. Observed moral diversity is an “is” claim (descriptive), while morality deals with “ought” claims (normative)–the way things ought to be, not the way they are. What ought to be does not necessarily follow from what is.

A similar argument would be to say: many people solve a mathematical equation in many different ways, getting different answers–therefore, there must be no correct answer, and math itself must be relative! Clearly, this does not follow. And unless somehow proven otherwise, it remains a possibility that some people have their morality wrong or less correct, while others have it right or more correct–a theory which accords with the phenomenon of moral diversity just as well as relativism does.

Furthermore, if we follow this reasoning from the descriptive to the normative, then we must draw the same conclusion about science: that it, too, is relative, since scientific knowledge and practice have differed as widely across time and among cultures as morality has–more so, if you think about it. Yet we don’t hold this view; we believe that the science we have is the right science, or at least the most fully developed, the closest to correct we’ve been able to come thus far. We are Hegelians when it comes to science, but postmoderns when it comes to ethics.

It is also curious that moral language began with and developed using absolute terms. Wouldn’t we only have developed the language of personal preference if that’s all that ethics was? Of course, this doesn’t prove that moral relativism is false, only that it seems strange, or unlikely.

But to go further, I would argue that moral relativism breaks its own rules in the same way that general relativism does. Moral relativism (and nihilism), in going beyond the descriptive, asserts a truth about ethical statements. Remember, these positions hold that moral statements such as “killing is wrong” are either true only within their theoretical framework (relativism) or actually expressions of personal preference with no truth value, societal or otherwise (nihilism). To make this work, they must maintain that meta-ethical statements–which is what they themselves are–are true, while ethical statements themselves are relative or not meaningful at all. In other words, we can talk in concrete terms about morality, but moral statements themselves are relative.

This raises a couple of questions. First: how do we know that statements of ethics are preferential, but that statements of meta-ethics are truthful? How is it that statements about morality actually correspond to reality, whereas statements of morality correspond only to a theoretical framework or a personal preference? I’m not sure that there is a coherent reason for drawing the Line of Relativism at ethical statements but not at meta-ethical statements, or scientific statements, or any other sort of statement–other than by appealing to some other philosophical position (humanism, or atheism, perhaps). If you draw the line too far forward (more than just morality is relative), then you tend toward general relativism, and you are in danger of a self-refuting system. If you draw the line too far back (there are absolute truths), then you are no longer really a relativist, and the burden falls on you to show why morality is relative and other things are not.

Second: how and why do we distinguish between the ethical and the meta-ethical in the first place? How are statements about morality and statements of morality any different to begin with? I would argue that this distinction is not justified, and that in fact there is cause to believe the opposite–that statements about morality and statements of morality are of the same kind. To say that morality is relative is to say that there are no absolutely necessary moral obligations–which is to say, that that is the morality (or lack thereof). Therefore, saying that morality is relative or does not exist is itself a moral claim, asserted as absolute–saying, essentially, and in self-defeating contradiction, “all absolute moral claims are false (except this one).” Or perhaps better: “everything about morality is relative (except that it is relative).”

The support seems to me weak at best, deeply flawed at worst, and the theories themselves may contain irresolvable contradictions. Now, let’s examine the application of the theories. What are potential problems with moral relativism’s and nihilism’s outworkings, supposing them to be true?

First, a lot of people contradictorily extract a concrete ethics out of the view that ethics are relative. People say, “right and wrong are relative, therefore we should be tolerant” not realizing that they’ve just made “tolerance” an absolute virtue–making right and wrong not totally relative (and also making the same category error of moving from descriptive to normative). This is one of the key problems I see with humanism in its contemporary form.

(There are similar problems in arguments against religion. Some people argue that no religion can claim to have the whole truth, that it is presumptuous or arrogant to claim a perspective of superiority over others. But isn’t this argument itself a superior claim about religious truth of the same sort–indicating that its speaker does, in fact, have the perspective that supposedly can’t be had or shouldn’t be claimed?)

But if one manages to avoid this trap, there are others that reveal a deep incoherence in the theory. Some moral relativists say, “I have my ideas of what is right and wrong; others do too, and they are welcome to them.” The problem is that ideas about right and wrong necessarily extend beyond the self. “I think that people ought not to murder, but it’s all relative”–yet to say that “people ought not” implies a standard that applies to more than yourself, indicating externality, and thus a concrete, not relative, nature. (Otherwise, how could your personal standard meaningfully apply to other people?)

One may avoid any sense of corporate, external morality by saying only “I ought not to murder” or “it would be wrong for me to murder,” but this still suggests at least two working pieces: a code, framework, or set of guidelines, and the self upon which those guidelines are imposed. This code can only come from inside or outside the self. If the guidelines are imposed by the self, the question becomes: how do we come up with these “oughts,” and how do they meaningfully guide behavior? Are there two selves? Otherwise, how are one’s “oughts” different from moment-to-moment impulses, or from what one actually ends up doing? You’re making up your own “oughts.” You can only say “I ought to do this thing” because you mean that you want to, or you’ve decided you will do it, and then you do it. But then you are making a distinction where none exists merely to satisfy the “ought” language–and justifying your decisions and actions with circular reasoning, when there is no need to justify those actions in the first place!

But this is contradictory and incoherent, and no one thinks this way. We all think and act in opposition to relativism–in terms of two working pieces, ourselves and something else concrete. For example, a relativist might say that our choices are relative to what consequences we find desirable or undesirable. But how do we decide what is desirable or undesirable? Even if what you find desirable is relative to you, you treat it as though it were concrete. You decide what would be desirable on the assumption that it would be desirable no matter what you choose. What decision you make depends on what is desirable, not the other way around.

If you unexpectedly obtain an undesirable result instead of a desirable one, or discover that what you thought would be desirable is really undesirable, you conclude that you made the wrong decision–not that your appraisal of the result needs to change. This implies the permanence and externality of that desirability or undesirability, because it does not change depending on you. In resolving not to make the same mistake again, or in some other way integrating the experience into your decision-making process, you admit that your judgment is subject to error, and the only way that is possible is if what you judged is external to your faculties of judgment. This is contrary to the principles of relativism.

What you don’t do–and no one ever does–but what relativism demands, is to change your interpretation of the result based on the decision. You don’t say, “I’m sure glad I made that decision that got this undesirable result; I’ve learned that this result is actually desirable because it’s what my decision led to!” (“I’m glad I decided to buy that car on a whim from the salesman; now I know that getting cheated and losing money is good!”) Again, no one does this, but this is what relativism dictates should happen: if desirability is relative, then we have no justification for interpreting the results of our choices apart from the choices themselves–and for our choices, we have nothing other than the fact that we made them. If we thought it was right at the time, then it was right!

So moral relativism entirely undermines the idea of self-improvement, since there is no way to know or say that we did something wrong–that we “ought” to be “better”–if whatever we did is what we ought to have done. Any selection of a standard can be nothing other than arbitrary or capricious. This is even more problematic on a societal scale: how are we to improve society without pointing to certain ethical codes as better or worse than others? On what basis, for instance, did our society depart from its endorsement of slavery, other than to say that we had had it wrong the whole time–not that we had simply, arbitrarily changed our feelings about it? Maybe society exchanges one moral framework for another–but on what basis? Doesn’t it have to appeal to a higher, more permanent framework to justify the exchange? On the other hand, if relativism is true, the only true appeal to a certain ethics is through force. A leader can’t meaningfully say “murder is wrong” to his citizens except as a lie justifying his convictions with an ethos beyond his own; he can only truly say “you’d better not murder or I’ll make sure something bad happens to you” (assuming that “something bad” is something actually bad, and not just relative…)

If true ideas about right and wrong don’t exist, then what is the meaning of having ideas about right and wrong in the first place? It’s like having very strong opinions about who Santa Claus is, what he is like, but without believing that he actually exists. You might believe that Santa Claus has pointy ears–you may believe this very strongly–but then, what is the point of this, if Santa Claus isn’t real? Why do you bother with this belief in the first place?

If you are a true relativist, I would argue, you are actually a closet nihilist, and you must give up your personal beliefs about right and wrong altogether in order to be consistent. But does this work? This leads me to my next point: the difference between belief in Santa Claus and belief in morality is, of course, that one is an integral part of the human experience and the other is not. So I’m going to apply this analogy even where it breaks down. If we discover that arguing about whether Santa has pointy ears is meaningless because he doesn’t actually exist, that’s fine–we can stop worrying about his ears altogether; it doesn’t affect our lives. But if we decide that morality doesn’t exist, it still affects our lives; there is no escape from it. Unlike choosing to disbelieve in Santa, deciding whether morality exists is more like deciding whether to breathe: it has consequences either way. Keep breathing air: you live. Stop breathing air: you die. Everyone makes choices, and what one believes about the nature of morality influences the decisions one makes. Therefore, beliefs about the nature of morality influence everyone. Whichever position you choose, it matters. There is no way for it not to matter.

The practical result of moral nihilism–dismissing the idea of the “ought” altogether–is to live without any framework for making decisions, conforming to whatever whim strikes us, living reflexively. But this is not possible. If we decide to live reflexively in accordance with our nihilistic beliefs, then we have constructed a moral framework (we “ought” to live reflexively), and we are no longer nihilists.

Even if, theoretically, we could avoid this, moral nihilism inherits relativism’s problem of self-justification, but with a twist: if there is no right or wrong, there is no justification for analyzing the result of a choice at all–under nihilism one cannot even evaluate the result of a choice by appeal to the choice itself.

And most importantly, moral nihilism begins to undermine reason itself. Reasoning is a series of “oughts”–rules of logic and inference. This “ought” to follow from that. With no “oughts,” there is no reason. If we object, saying that moral “oughts” don’t exist but “oughts” of reason and logic do, then we fall into the same error as the relativist: drawing an arbitrary Line of Nihilism along the questionable distinction between ethics and meta-ethics. What is the difference between “oughts” of morality and “oughts” of logic? And we contradict ourselves in saying “there are no moral truths (except this one).” So I don’t think it’s a slippery slope to say that moral relativism and nihilism are not only self-contradictory and contrary to human experience, but threaten to (and by necessity must) undermine much more than just morality.

Thus, the theories (moral relativism and moral nihilism) seem highly unlikely, especially considering that there are other competing moral frameworks more congruous with human experience, thought, and life (e.g., Aristotle’s telos, divine purpose/Christian virtue, human rights theory). Nonetheless, relativism and nihilism remain attractive positions. They seem to arise from the sort of naturalistic atheism which, although subject to much criticism itself, is by and large the norm in our cultural place and time. They appeal to people who disbelieve in God, the supernatural, and the metaphysical, and who hold scientistic beliefs that dismiss the idea of anything “innate” in humanity other than biological processes (and to people who want to do whatever they want to do without having to think deeply and seriously, and who want to avoid the pain of moral correction). But moral relativism and nihilism have serious problems: their philosophical justification is questionable at best, and they are incongruous with how we as humans think and live.

The Death Penalty: Is There a Disposable Mob Boss?

A small group, only three of us. We had no planned agenda. There was a question about Aristotle’s four causes, but I wanted to put off trying to talk about them until I had done some more research. So instead, we decided to discuss some ethical issue. On a whim, I suggested the death penalty. More specifically, I asked the question: is it ethical for a society to put to death one of its members, and if so, when?

To figure this out, we began by examining some of the common justifications for the death penalty. The first was cost: the death penalty, it is claimed, is cheaper than keeping someone in prison for life. But whether or not that is true, the cheapest option is not always the “right” option. Lower cost does not justify something ethically.

We moved on to the use of the death penalty as a punishment. Is the death penalty a proper punishment for murder, and is it justified to carry out such punishment? (I separate these two questions because even if the death penalty is an appropriate punishment for murder, that does not necessarily justify a society in enacting it.) “Punishment” seemed to admit of some different interpretations. One potential definition is that “punishment” is something that makes someone feel bad for their actions. Obviously, since the death penalty takes away one’s ability to feel anything at all, it doesn’t really function to inspire guilt or bad feelings in someone who has committed the act (other than, perhaps, their anguish in prison as they await the carrying out of their sentence–more like a torture technique).

Alternatively, it might be justified by its function as a deterrent: the threat of the impending punishment might be such that it causes an agent to alter his or her course of action. This had some issues. The strength of this particular threat, and thus its effectiveness as a deterrent, appears to come down to one’s personal beliefs about death. The more afraid one is of death, the more effective the threat of death should be as a deterrent. If you are afraid of going to hell, then you will not want to be sent there. However, if you believe that you will go to heaven, you might be less likely to fear death. Similarly, if you are an atheist and you view death as merely the cessation of biological activity, then you would have a lesser fear of death than if you were afraid you might go to hell. If you are a radical Islamist and you view the death penalty as a death in battle, or for a cause, you might even welcome it. Thus, its effectiveness as a deterrent is probably limited by a population’s beliefs about what happens after death.

The jury is still out on whether societies with the death penalty statistically have fewer murders. But even if the death penalty were shown to be an effective deterrent, would that necessarily justify it? More on that later…

Another potential function is as a settling of a score, a payment of a debt, a restoration of the balance of justice. But this also seemed inadequate. If I am a thief, and I steal money from someone, I can repay the debt. But if I am a murderer and I take someone’s life, I cannot give that life back. Furthermore, the taking of my life by capital punishment does not bring the scales back into balance, it simply adds to the body count. Capital punishment doesn’t seem to settle a score in the same way as other punishments or acts of retribution. (But even these have their problems. If I steal Steven’s car, I can return his car or pay him a sum of money equivalent to it–but if I steal Steven’s car on a day when Steven has a very important job interview, making him unable to go, I have not only robbed him of his car, but I have also robbed him of the opportunity to get a very important job. And there is no way to repay the debt that I owe him on that score, even if I return the car, or two cars, or three.)

Some would argue instead that it provides consolation for the victims, even if it is not a proper restoration of the balance of justice. Well, it might, but that isn’t necessarily an ethical justification. Just because something provides comfort or consolation, even if it did function as an effective restoration of the scales of justice, or as a deterrent, doesn’t mean it’s right. A common justification is “an eye for an eye”–you take a life, you lose your own. But we don’t do that for other crimes. We kill killers, but… we don’t rob robbers. We don’t rape rapists. How is it that we can say: killing is wrong for a citizen but okay for the government, while rape, for instance, is wrong for both the citizen and the government? Where does the double standard start and where does it end?

Human beings have a strong sense of justice, and the desire to see it done. Punishment can satisfy that desire. We watch TV programs like Game of Thrones, get wrapped up in hating certain despicable characters, and then feel catharsis when they are punished for their misdeeds. We all have a desire to see justice done, almost to the point of bloodlust. But there is a problem. Steven used the example of a villain in Bad Boys II–an otherwise “disposable mob boss” who elicits no sympathy, but who has a daughter. When the villain is defeated, or killed (I don’t know, it’s been a long time since I’ve watched Bad Boys II), there is some lingering unease–what happens to the daughter? This raises the question of whether in real life there are any purely disposable people, like the deplorable characters in Game of Thrones, or whether people always have mitigating factors that, if known, would make their deaths rest less easily with us, like the not-quite-disposable mob boss from Bad Boys II. Does everyone have points of sympathy? If so, is it for us to say that anyone deserves death? We know that most murderers and likewise messed-up people usually have tortured pasts and histories, things that can make us sympathetic to their plight. At what point do we draw the line, and for what crimes, saying: we feel sorry for you, but not sorry enough that we won’t kill you–?

But this raises the broader question of to what extent anyone is responsible for their actions in general. We said that murderers and otherwise messed-up people almost always have some history of trauma, and this could potentially mitigate their responsibility for their actions in some way. But if that’s true, couldn’t it be argued for all of us? Aren’t we all, in not insignificant ways, the products of our upbringings and our backgrounds? And at what point do we stop saying “well, that’s just the way I was brought up” and start saying “yes, that was my fault”? Here it appears that we must either feel sorry for everyone or feel sorry for no one–unless we can identify a point at which someone is or is not responsible for certain actions.

This poses another problem, though: if someone’s action is often partly out of his control, even more so is its result. For example, we discussed how murder and attempted murder are judged differently–but the action was the same, and the only reason it turned out differently was likely some factor outside the agent’s control: his lack of experience as a murderer, his clumsiness, accidents that conspired to keep his victim alive. We see fit to judge people on the results of their actions, which are just as much the product of accident as of intention (or even more so). (For more on this issue, see Thomas Nagel’s “Moral Luck.”) Why do we judge someone differently for something that wasn’t his fault?

But the prospect of no longer saying that people are responsible for their actions does not seem like a viable option. (We all know people who shirk responsibility.) So perhaps we should judge someone by his intent, not by his actions. However, this is incredibly problematic for any serious system of justice in society. How do we know someone’s interior thoughts and intentions? “I didn’t mean to speed…”

We discussed negligence: when is not doing something doing something wrong? Here again, there seems to be a double standard. The law seems to indicate that some acts of neglect are reprehensible, while others are not. Neglecting to feed your child is illegal. Neglecting to intervene in a fight (which may lead to a murder) is not.

Somewhere in here, we also discussed the off-chance that a verdict is wrong. Is it excusable for a government to take the life of a citizen if there is a chance, no matter how slight, that its verdict was in error?

This was as far as we got. There is more to say on this issue, but so far we could not find an acceptable justification for the killing of another human being by a government unless it was preceded by a divine command–God ordering that the person should be killed. So many ethical issues seem to run afoul of the “moral luck” problem. And this I believe is where Aristotle’s ethics project led him, although instead of “moral luck” he might call it fate, or tragedy: no matter how virtuous a person’s character, intent, and actions, goodness may still be thwarted by factors outside of the control of human beings.

Next week: the four causes, and is institutional education social engineering?

Flash Apologetics in Mary’s Room

People trickled in slowly today. As such, conversation began with fluidity and not much intentionality. We briefly touched upon the ills of modern technology, the legalization of all drugs, the influence of Puritanism on American society and government, and the contradictions of organized anarchy. All these are fruitful topics, and perhaps they may resurface later.

The real discussion began with Steven introducing the Mary’s Room thought experiment: Mary is a girl who has studied the subject of color, and knows all there is to know about color. However, she has lived her whole life in a black and white room, can’t see herself, reads black and white books, and has a black and white computer screen, and therefore has never personally witnessed any color. Suddenly, one day, her computer screen malfunctions and displays a red apple. The question is: has she learned anything new?

The question seems to turn on epistemology. Is the experience of something different from the thing in itself? Is experience distinct from other forms of knowledge, and does it count as “knowledge” to begin with?

Some of us said yes, and some of us said no. It seemed to me that to have something described to you, and then to actually experience that thing, are different. It also seems to me that experience could be classified as a form of knowledge, however subjective it may be. Mary now knows what it is like (for her) to see red.

This led back into our discussion about the forms. The form “red” exists separate from the objects that display it, but it is via those objects that we come to know the form through our senses. As in, when we look at a chair, we understand more about the chair than merely what we see–we have a sense of its structure, etc., we understand parts of the chair, we fill in the blanks of what we cannot see, which is ultimately the totality of our perception of it. Our perception is not limited only to the sensory data that we receive. The same is true of the forms. We cannot witness the forms in and of themselves, but we can come to know them through our perception of them as they are expressed by material objects.

As mentioned before, this flies in the face of the post-Hume empiricism which we’ve all more or less been indoctrinated with culturally. To my knowledge, Hume didn’t have solid “reasons” as such for why he denied transcendence (such as the forms, mind-body dualism), cause and effect, etc.– he just didn’t accept them.

In the midst of this discussion, I “came out” as a Christian, and Aramis pressed me on what the Christian perspective on the death penalty might be. I argued that an “eye for an eye” killing of a killer might be null under the covenant of grace. Additionally, ending a person’s life cuts short the potential, however slight, that they might repent and be redeemed.

This led naturally, if uncomfortably, into a discussion of the truth of Christianity in general, focusing mostly on the existence of God. This makes sense because, of course, if God can be disproved, Christianity crumbles. But the problem is, determining what constitutes “proof” or “evidence” for God is already to begin making statements about who or what God is in the first place. (At this point, I went about explaining what I understand to be the “presuppositional” apologetic. As before, I’m going to try to distill the argument, rather than trace its many twists and turns as they occurred in conversation.)

We all believe many sorts of different things, many of which are justified without the use of what we commonly refer to as “proof” or “evidence.” (Here I borrowed a little from Alvin Plantinga’s Warranted Christian Belief.) A scientific belief, such as that a certain substance changes from liquid to gas at a certain temperature and air pressure, is one based on evidence. This belief depends on repeatability and the soundness of the principle of induction, and rests on the assumptions of constancy in the universe and the existence of material causes. Another sort of belief, such as what you had for breakfast this morning, is based not on evidence but on memory and perception. You don’t open your stomach and examine its contents, or return home and examine the remnants on your unwashed plate–you just know, and what’s more, the knowledge is valid. It is a “properly basic” belief. Yet another sort of belief is that the ad hominem fallacy is an invalid form of argument. How might that be proved true? That’s difficult to do, because the classification of the ad hominem as a fallacy is part of the system that we use to determine truth in the first place!

So, there are some types of beliefs that could be said not to require “proof” or “evidence” in the conventional sense, but that are still nonetheless justified. So, saying that the belief in God requires a certain proof is to classify the belief in God into a certain category of beliefs–more along the lines of scientific knowledge. But is this categorization justified? Well, the only way to say so is to say something about God–if belief requires such-and-such, it can only do so on account of something about the nature of God himself. Therefore, since this belief determines what constitutes evidence and how you interpret that evidence, to say that you can’t believe in God because there is no evidence is really to say that you’ve already chosen not to believe in God at the outset.

If the beliefs in God and Christianity are more along the lines of those “properly basic” (as Plantinga argues at great length), not only would it be difficult to frame any sort of argument for them, but demanding such an argument might not be justified. It would not be the case that belief in God is validated by logical or scientific means, but rather that belief in logic and science is validated by the existence of God.

(As an aside: there are, of course, arguments for Christianity, most famously Aquinas’ proofs. But even those arguments turn on an Aristotelian understanding of metaphysics, something that is mostly in concord with the teachings of Christianity. Since Descartes, I think, Western philosophy no longer subscribes to that sort of metaphysic, which might be why many people find Aquinas’ proofs insufficient.)

There were also some other objections worth noting, including: why would God create some people only to send them to hell, why does he make people sick/sinful, why would he make creation with a plan for salvation that included the killing of his own son, etc. But I would consider these to be “internal” issues–they have to assume the existence of God at the outset in order to be meaningful. Furthermore, if you can’t think of a good reason for God’s doing something, that doesn’t necessarily mean that there isn’t one. Another question was, if humans are broken and sinful, doesn’t that mean they are worthless? The answer according to Christianity is no–the price paid by Jesus indicates the worth God finds in humanity. If something is broken, but can be restored, it doesn’t lose its worth. If I have a precious, rare vase that I break into a million pieces, it would be said to have lost its worth, but that’s because it can’t ever be restored to its original state. Christianity says that we can, through Jesus, be restored. The original worth can be recovered (and for the Christian, is in the process of being recovered by the sovereignty of God through sanctification).

The next question was, then, if belief is a choice, why should I choose to believe? My response here was sloppy. I tried to say something along these lines: Christianity would argue that its worldview encompasses and explains more than any other worldview. For example, a naturalistic worldview explains physical phenomena very well, but it is at a loss to explain our sense of morality and justice. A humanistic worldview explains why morality is important, but it fails to provide an adequate response when humanity fails, or when suffering happens.

I used the example of Chesterton’s madman (from Orthodoxy). If Steven is a madman, and he believes that everyone is in a conspiracy against him, and I say to him, “We are not in a conspiracy against you,” Steven will interpret this as confirmation of his theory. “Of course you would deny it,” he will say. “You would never admit that you are in a conspiracy against me! That proves that you are, in fact, in a conspiracy against me!” But if Steven believed that everyone was not, in fact, in a conspiracy against him, this theory would explain the data as well: I am telling the truth. But unfortunately, there is no way to argue him out of it, and there is no way for him to reason himself out of the belief, since his reasoning is determined by the belief, and not the other way around.

All worldviews explain the world in closed, self-justifying systems, and therefore can’t be argued against per se, but some might be said to explain the world more completely than others. Why believe in Christianity? Well, why should the madman choose to stop believing that everyone is in a conspiracy against him?

There is, of course, more to say about why one might choose Christianity, but that was all we covered. The discussion ended there abruptly because we were over time. However, we pledged to continue it at the next meeting (as uncomfortable as I am playing amateur apologist). Perhaps we will look at some of the classical proofs for Christianity (i.e. Aquinas), or maybe we will pursue this presuppositional argument further and discuss the supposed benefits of Christianity as a worldview.

The Atomic Bomb, Artificial Intelligence, and the Forms

A small group today (only three of us), but a discussion no less lively than usual. We began by talking about the atomic bomb in the context of an essay prompt for a history class. The question asks: was dropping the atomic bombs on Hiroshima and Nagasaki justified? The arguments in favor seem to rely on speculation–specifically, that had the bombs not been dropped, the body count would have been higher in the event of a more traditional invasion/attack and/or the continuation of the war. Others rely on the assumption that the Japanese would not have responded to a demonstration or a test, and that direct action without prior warning was the only effective way to send the message.

The problem with these arguments is that they are wildly probabilistic, and there is no way to really know what “might have happened” in an alternate history. As we discussed last week, these sorts of projections and predictions are seldom justifiable. The unfortunate fact, though, is that these sorts of decisions have to be made.

The arguments against the dropping of the bombs appeal more to morality–absolute standards by which a country should or should not act in time of war. These, however, seem arbitrary. Who’s to say which lives are more valuable than others? (Are American lives more valuable than Japanese? Are civilian lives more important than soldiers’?) It is possible to say that warfare in general is immoral, in which case anything done in the context of warfare is automatically immoral to begin with, and we can say nothing either way about the dropping of the atomic bomb.

As to possible approaches to a paper, I suggested perhaps beginning with a strong “if”–if a certain premise or set of premises is true, then certain conclusions will necessarily follow, in regards to morality, prediction, or whatever. But in the end, it all depends on our acceptance of the initial premise or set of premises. Or, one might begin by putting the burden of proof on a certain party, and then show how that party has failed to prove its standpoint.

We discussed the nature of debate and argument, noting that some people seem less interested in having a dialogue than merely having a forum to express their own opinion. We can all be violent, aggressive defenders of our own viewpoints, failing to respond reasonably to someone else’s argument. The problem, for me, arises when that person does not really present an argument, nor seem interested in listening to one. We must always remember to be civil with people, even if they disagree with us, and even if they fail to go about argument and debate in a way that we think they should.

From there, the conversation took a turn to artificial intelligence. Apparently, Google has created AI that can hold conversations–conversations that are mostly nonsensical, but that sometimes stumble onto something philosophical, such as what constitutes “art,” or whether one of the interlocutors is, in fact, a human. We discussed whether AI could be said to be “conscious,” which of course depends on our definition of consciousness. Is something conscious if there is something it is like to be that thing? (See Nagel’s “What Is It Like to Be a Bat?”)

I posed the question: given the proper amount of time and technological advancement, could an artificial intelligence ever be said to constitute a human consciousness? This raised another question: could an artificial intelligence, given time, learn everything, and therefore know everything, that a human knows? What is it to learn something, and what is it to know something?

From here, the conversation took several convoluted turns, involving an illustration featuring an adopted child living alone in a room painted blue, introduced to another human being (me) for the first time at the age of fifteen. Obviously such experiments are impractical to actually perform, but even as a thought experiment it was difficult to wrap our minds around. How much would this child “know,” beyond what we might call mere instinct? An AI can be taught to avoid damage to itself, but how does it know what might constitute damage, given the myriad things it could encounter? Might it mistakenly take a firm handshake as a portent of damage? A human being, by contrast, already intuitively understands what sorts of things constitute damage and what sorts don’t (even if they involve injury).

I posed the argument that there are certain things that we “know” that are not “learned” from experience, and that are not mere instinct either. This is what I understand to be part of what Plato talks about with the forms. I’m not an expert in this, so I may be mistaken. The argument was lengthy and met many objections, so I will summarize rather than try to trace the actual direction of the discussion.

My illustration was that, for example, when we learn the ad hominem fallacy of argument, what we are learning is only the term–the concept is something that we already know. We had likely encountered ad hominem before, so we knew what it was conceptually, and we also knew that it was wrong. But even if we had never encountered it before, we would still know, on the first encounter, that it was wrong, just as we would recognize that 1 + 1 = 3 is wrong (arbitrary mathematical language and symbols aside). We did not have to learn that–in fact, this is part of the framework by which we learn new things: if we are presented with an argument that contains an ad hominem, we know to question its conclusion, and therefore we choose what to “learn” from it, if anything.

So in “learning” the ad hominem, all we are actually learning is the term for it–like how we learn that the word “blue” denotes the color that we experience as blue. Even if we are taught that the color blue is actually called “orange,” or that the ad hominem is actually called the “gobbledygook,” the knowledge of the thing in itself is something we already have. In terms of Plato’s forms, we accept that the ad hominem is fallacious because we already have an idea of what truth looks like, or is. It fits the “form” of truth as we understand it, and in order to recognize that, we did not have to learn anything. We already knew it. So this theory of knowledge states that much of what we “learn” is actually just “recovered.”

This all comes into conflict with the notion that we, as human beings, are all just a bundle of organic matter wired with a set of instincts. We could call our sense of forms a kind of “instinct” but that seems to run up against the definition of “instinct” as it is commonly understood, and seems an inadequate explanation. It also runs up against common models of knowledge: we don’t know something, then we learn it, then we know it. It seems like there are certain types of knowledge–such as that water is composed of hydrogen and oxygen–that conform to these models, but that there are also types of knowledge (or perhaps better called “beliefs”?) that do not–such as that the ad hominem presents a fallacious argument, an unacceptable means for uncovering truth.

The final word on this–besides the suggestion that AI might never be said to achieve a state comparable to human consciousness because it cannot have access to this sense of the “forms”–was a question: given that there are things we can “know” that are not “learned,” at least not empirically, might this have some bearing on the field of ethics?

Sara seemed distressed, but I think it was a good exercise in considering another worldview, one that possibly makes better sense of things. Sometimes Philosophy Club is unproductive, but this one felt immensely productive. Except for the fact that there were no snacks. And on that count, it was an utter failure.