Preservation of the unfit, survivalism, and conforming to nonconformity

It’s been a while since Philosophy Club convened, and even longer since I’ve kept track of the minutes.

We began throwing out some different topics: Aristotle’s idea of the highest good (is it called eudaimonia?) was the first, by way of moral relativism and my devastating critique of it. (There is still some discontent regarding that post, but the relativists are in retreat for now.)

Serious discussion began around this hypothetical question: If human beings were created/grown for the purpose of transplanting the consciousness of the elderly into them (in order to allow people to live longer), would that be ethical? The human beings grown would be fully formed human beings, with consciousnesses of their own.

It seemed to us at first that it would be unethical, since the transplantation of consciousness would erase the “original” consciousness of the body. Even if the bodies are grown for this purpose, they are still human beings. We noted that a human being is more than the sum of his parts–growing an ear on a lab rat and then removing it is of little consequence, but growing an entire human being and then erasing his consciousness (in effect, killing him, although not bodily) is a different matter.

Then we discussed the euthanasia and assisted suicide question. At what point is someone’s suffering “enough” to merit his decision to kill himself, or the doctors’ decision to kill him if he is not conscious? Is it subjective? Some people find reason to complain even though they live in luxury. “My internet connection is slow!” Suffering? Meanwhile, some people whose situations and circumstances would seem to indicate great suffering can find contentment even so. A person can feel suicidal, as if life is not worth living, at the breakup of a relationship–but someone who has lost loved ones, or worse, can still think that life is a blessing.

In the case of euthanasia, we observed that the predictive powers of doctors are limited. No matter how low the chances of recovery, there is always the chance. Does this chance for recovery not justify keeping the person alive? Euthanasia is, paradoxically, an act of both arrogance and hopelessness.

The next question was asked in the context of Darwinism and survival of the fittest. In providing cesarean sections for pregnant women who are having difficulty in labor, and providing life support to infants who need it, are we preserving people unfit for survival? The answer seems to be yes. If “nature” were left to its own course, many people now living would have died during childbirth (mothers and babies), and others would not have been born (the mothers having died previously in childbirth). What are the potential consequences of this? Are we breeding a race of people radically unfit for survival in this universe? Will we eventually turn into a race of weaklings who will need more and more forms of life support as we preserve those who are unable to survive on their own, and always have been since birth?

Someone postulated that ethics comes from man’s instinct to survive: survival is the ultimate goal of all ethics, and right and wrong are decided by whatever best preserves the species. I objected to this with a few different ethical scenarios.

The first is, of course, eugenics and genocide. If we want the race to survive, shouldn’t we exterminate people who are less fit, who would weaken the human race and make it less likely or able to survive–or at least work to breed people who are stronger and more fit? Many people would say that both of these projects are ethically wrong. That it is even a question seems to indicate that ethics is not merely about survival.

Second, I posed the example of a woman who is infertile. Somehow, we know that she is infertile (perhaps she has had her ovaries removed). At this point, she is simply consuming resources. Should she not be exterminated, to preserve the resources for people who are actually aiding the survival of the race (by reproduction)? There were a couple of objections to this. First of all, people who are infertile can still make contributions to society. But then we run into the problem of deciding what constitutes a contribution. What, beyond existence, constitutes a contribution?

Third, I asked, what about acts of sacrifice? A man standing by a river who sees a drowning child should not jump in to save the child if survival is the ultimate priority–it would not even be a question. Refraining from attempting the rescue would guarantee his surviving the situation, whereas jumping in would significantly lower his chance of survival, even if it might raise the child’s chance a little. Further, a child stupid enough to fall in the river should be allowed to perish and not reproduce, since clearly this child is less fit than children who do not fall in rivers. This clearly runs against the vast majority of morality throughout human history, which values acts of sacrifice and courage. Survivalism values cowardice.
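
To make the survivalist’s arithmetic explicit, here is a minimal sketch in Python; the probabilities are entirely hypothetical, invented only to show the shape of the calculation, not drawn from any real data:

    # Hypothetical probabilities for the drowning-child scenario.
    # Under strict survivalism, the bystander simply maximizes his own
    # probability of surviving the situation.
    P_SURVIVE_IF_STAY = 1.00   # refraining from the rescue guarantees his survival
    P_SURVIVE_IF_JUMP = 0.85   # jumping in carries real risk (made-up number)

    def survivalist_choice() -> str:
        """Choose whichever action maximizes the agent's own survival odds."""
        return "stay" if P_SURVIVE_IF_STAY >= P_SURVIVE_IF_JUMP else "jump"

    print(survivalist_choice())  # prints "stay": the sacrificial act is never chosen

So long as jumping in lowers his own odds at all, the calculation comes out the same way, which is exactly the objection: the sacrificial act is never selected.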

None of this shows that survivalism is incorrect as an ethical system, but it does seem to dismantle the idea that we, as human beings, derived our ethics from an instinct to survive.

We discussed conformity and nonconformity. What exactly is nonconformity, and is it even possible? If you are a nonconformist, aren’t you really conforming to a certain idea about what nonconformity is? At the very least, nonconformity indicates that there is, at the outset, something to conform to; there may be a wide range of possibilities, but there is still a limit to them. There is a framework to conform to: at the very least it excludes whatever falls under “conformity.” I mentioned Thoreau’s apparent difficulty with holding nonconformity as an ultimate ideal–after two years at Walden Pond, he felt frustrated at having conformed to the path he had laid out for himself. But what did he think was going to happen?! With every choice we make, we only choose what it is that we are going to conform to, or how. There is really no such thing as pure nonconformity. This might also be why Nietzsche had a hard time developing a “new ethics”–he was vehemently anti-Christian, and wrote that the Übermensch would be the man free from all morality. But at the very least, wouldn’t this mean acting contrary to Christian values, and wouldn’t this itself be limiting?

I posited that all “nonconformity” is actually conformity to something outside of what’s been called “conformity” in the situation. For example, an activist might refuse to conform to an unjust law–but only because he is conforming to a higher ethical framework, one that declares the law unjust.

Lastly, we discussed argument itself, and how too often people engage in argument to win, not to learn (ourselves included). I mentioned Rogerian argument as an ideal alternative, in which one argues by convincing the other party that it is in their own interest to adopt one’s position, which requires beginning from common ground, a place of solidarity. https://writing.colostate.edu/guides/teaching/co300man/com5e1.cfm

Addicted to Love (and Apologetics Continued)

We picked up essentially where we left off, launching right into the arguments for the existence of God. Steven had done some outside research, looking into Aquinas’ arguments. The one he found most compelling was the first cause argument: everything we observe has a cause, but the chain of causes cannot go back forever, so there must be a first cause that is itself uncaused. This led into a discussion of Aristotle’s four causes, since many of these arguments rest on Aristotelian metaphysics. Aristotle’s efficient cause is what we normally think of as “cause” in our contemporary understanding, but there are three others that are also used to frame arguments for God. Our understanding of these arguments is necessarily impoverished because we don’t subscribe to this metaphysics. I will post some resources on these later, since our discussion mostly consisted of me trying (badly) to remember them without really understanding them sufficiently.

We then discussed the problem of evil: if God is all-good and all-powerful, why is there suffering and evil? This is a common objection to the existence of God (at least a good God), but my assertion was that it doesn’t actually function as an argument against the existence of God, since it has to assume that God exists in order to be meaningful. It is essentially an “interior” issue within the belief system, not an objection from outside of it. In any case, it is a problem, and there are a couple of responses.

A common response is the free-will argument: God is not ultimately responsible for evil–human beings are. It was good for God to give humans free will, but humans misuse that free will, and therefore evil exists. There are a couple of problems with this argument. First, it fails to explain natural evil and suffering–natural disasters, disease, etc. Second, a potential loophole is the question: wasn’t it up to God to give people free will? And in doing so, didn’t he know that people would commit evil, and isn’t he therefore ultimately still responsible for evil? There are some responses to this. Greg Boyd argues for a position called “open theism,” in which God is said, in a sense, not to know in advance what free creatures will do. This view sits uneasily with much of what we see in scripture and with much theology about God’s omniscience and foreknowledge, but it is one potential response.

A second response to the problem of evil is to identify a couple of other problems with the argument itself: it assumes 1) that allowing suffering is not good, and 2) that we as humans have a viewpoint comprehensive and objective enough to know that the universe would have been better had it been created without the inclusion of, or potentiality for, evil. Both of these assumptions are contestable. If there is a God, we are not that God, so we have no grounds on which to judge the act of creation, nor, therefore, grounds on which to say that the inclusion of suffering in its design is ultimately an evil thing. The Bible itself (see the Book of Job) seems to say as much. This response is disconcerting–it does not easily resolve the cognitive and emotional dissonance we feel over this issue. But I find it compelling, and I think it is sufficient as a rebuttal to the argument.

We left off here, and revisited the topic of artificial intelligence. The question this time was whether or not artificial intelligence could be said to be conscious, and whether or not it could be said to act deliberately. Aristotle separates human beings from the animals using this as one of his criteria: that human beings are deliberate agents. As such, it seemed to us that artificial intelligence does not meet this criterion. Whatever “deliberation” occurs is predetermined by its programming, and so artificial intelligence is simply following its programming to a predetermined end. The goal of artificial intelligence is ultimately the execution of its code. Human beings have a different sort of end or goal (telos in Aristotelian language), and seem to have a different level of awareness of the sensory data taken in and analyzed. Artificial intelligence only responds to stimuli; human beings deliberate.
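
As a crude illustration of what “following its programming to a predetermined end” means here, consider a toy stimulus-response program; this is a deliberately simplistic sketch of my own, not a description of how any real AI system actually works:

    # A toy stimulus-response "agent": every output is fixed in advance by
    # its rules, so nothing resembling deliberation ever takes place.
    RESPONSES = {
        "greeting": "Hello.",
        "question": "I do not know.",
        "farewell": "Goodbye.",
    }

    def respond(stimulus: str) -> str:
        """Map a stimulus to a predetermined response; the 'choosing' was
        done by the programmer, not by the program."""
        return RESPONSES.get(stimulus, "No response.")

    print(respond("greeting"))  # always "Hello."; no deliberation is involved

Whatever branching such a program performs is exhausted by its table of rules; the claim above is that human deliberation is not exhausted by any such table.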

The final topic of interest was that of romantic love, and whether an addiction to romantic love might qualify as a type of disorder, such as might be categorized in the DSM. First we had to delineate what constitutes romantic love versus other types of love (such as long-term relational love, the love of beauty, or a love of ice cream, for example), and we seemed to settle on “romantic love” as defined by the chemical responses in the brain that create the emotional “high” someone gets at the onset of a new romantic relationship. Then we determined that such a chemical response is not yet properly called an addiction–the chemicals in themselves don’t necessitate addiction. Addiction is a response to the chemicals, not the chemicals themselves. We all seemed to agree that an addiction to romantic love is at least a possibility, though not a reality for some, if not most, of the people who experience romantic love.

Then we raised the larger question: what merits inclusion in the DSM? We wondered whether the inclusion of something in the DSM as a “disorder” isn’t arbitrary. What is the gold standard, the “average” person against which all of these disorders are weighed? We pointed to the fact that many disorders have the potential to be constructive (such as ADD, when properly managed), and that some have been removed (homosexuality used to be classified in the DSM). How is it that we know a certain facet of human behavior is a dis-order? We also pointed to the problematic nature of diagnosis itself. Someone might exhibit the symptoms of a disorder and yet not have that disorder. It seems that in some cases, it’s the assumption that the person has the disorder that leads to the later diagnosis–confirmation bias. Sara told the story of a man in prison who exhibited the signs of a psychopath and could not escape the diagnosis.

This was a particularly rich discussion, and I’m missing some of the details. We left off with some suggestions of potential topics for next time: substance dualism, and the everlasting problem of morality and ethics, which is almost as big a problem as the fact that there were still no snacks.

What is it like to be a bat?

Thomas Nagel’s now-famous article about the impossibility of using a reductionist approach to understanding human consciousness.

We appear to be faced with a general difficulty about psychophysical reduction. In other areas the process of reduction is a move in the direction of greater objectivity, toward a more accurate view of the real nature of things. This is accomplished by reducing our dependence on individual or species-specific points of view toward the object of investigation. We describe it not in terms of the impressions it makes on our senses, but in terms of its more general effects and of properties detectable by means other than the human senses. The less it depends on a specifically human viewpoint, the more objective is our description. It is possible to follow this path because although the concepts and ideas we employ in thinking about the external world are initially applied from a point of view that involves our perceptual apparatus, they are used by us to refer to things beyond themselves—toward which we have the phenomenal point of view. Therefore we can abandon it in favor of another, and still be thinking about the same things.

Experience itself, however, does not seem to fit the pattern. The idea of moving from appearance to reality seems to make no sense here. What is the analogue in this case to pursuing a more objective understanding of the same phenomena by abandoning the initial subjective viewpoint toward them in favor of another that is more objective but concerns the same thing? Certainly it appears unlikely that we will get closer to the real nature of human experience by leaving behind the particularity of our human point of view and striving for a description in terms accessible to beings that could not imagine what it was like to be us. If the subjective character of experience is fully comprehensible only from one point of view, then any shift to greater objectivity—that is, less attachment to a specific viewpoint—does not take us nearer to the real nature of the phenomenon: it takes us farther away from it.

http://www.philosopher.eu/others-writings/nagel-what-is-it-like-to-be-a-bat/

The Atomic Bomb, Artificial Intelligence, and the Forms

A small group today (only three of us), but a discussion no less lively than usual. We began talking about the atomic bomb in the context of an essay prompt for a history class. The question asks: was dropping the atomic bomb on Hiroshima and Nagasaki justified? The arguments in favor seem to rely on speculation–specifically, that had the bomb not been dropped, the body count would have been higher in the event of a more traditional invasion/attack and/or the continuation of the war. Others rely on the assumption that the Japanese would not have responded to a demonstration or a test, and that direct action without prior warning was the only effective way to send the message. The problem with these arguments is that they rest on counterfactual probabilities, and there is no way to really know what “might have happened” in an alternate history. As we discussed last week, these sorts of projections and predictions are seldom justifiable. The unfortunate fact, though, is that these sorts of decisions have to be made. The arguments against the dropping of the bomb appeal more to morality–absolute standards by which a country should or should not act in time of war. These, however, seem arbitrary. Who’s to say which lives are more valuable than others? (Are American lives more valuable than Japanese lives? Are civilian lives more important than soldiers’?) It is possible to say that warfare in general is immoral, in which case anything done in the context of warfare is automatically immoral to begin with, and we can say nothing further either way about the dropping of the atomic bomb.

As to possible approaches to a paper, I suggested perhaps beginning with a strong “if”: if a certain premise or set of premises is true, then certain conclusions necessarily follow, with regard to morality, prediction, or whatever else. But in the end, it all depends on our acceptance of the initial premise or set of premises. Alternatively, one might begin by putting the burden of proof on a certain party, and then show how that party has failed to prove its standpoint.

We discussed the nature of debate and argument, noting that some people seem less interested in having a dialogue than merely having a forum to express their own opinion. We can all be violent, aggressive defenders of our own viewpoints, failing to respond reasonably to someone else’s argument. The problem, for me, arises when that person does not really present an argument, nor seem interested in listening to one. We must always remember to be civil with people, even if they disagree with us, and even if they fail to go about argument and debate in a way that we think they should.

From there, the conversation took a turn to artificial intelligence. Apparently, Google has created AI that can hold conversations–conversations that are mostly nonsensical, but that sometimes stumble on something philosophical, such as what constitutes “art,” or whether one of them is, in fact, a human. We discussed whether or not AI could be said to be “conscious,” which of course depends on what our definition of consciousness is. Is something conscious if there is something it is like to be that thing? (See Nagel’s “What Is It Like to Be a Bat?”)

I posed the question: given the proper amount of time and technological advancement, could an artificial intelligence ever be said to constitute a human consciousness? This raised another question: could an artificial intelligence, given time, learn everything, and therefore know everything, that a human knows? What is it to learn something, and what is it to know something?

From here, the conversation took several convoluted turns, involving an illustration featuring an adopted child living alone in a room painted blue, introduced to another human being (me) for the first time at the age of fifteen. Obviously such experiments are impractical to actually perform, but even as a thought experiment it was difficult to wrap our minds around. How much would this child “know,” beyond what we might say is mere instinct? An AI can be taught to avoid damage to itself, but how does it know what might constitute damage, given the myriad things it could encounter? Might it mistakenly think that a firm handshake is a portent of damage? A human being, by contrast, already intuitively understands what sorts of things constitute damage and what don’t (even if they involve injury).
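
To make the handshake worry concrete, here is a minimal sketch of a naive damage-avoidance rule; the threshold and the sensor reading are invented purely for illustration:

    # A naive damage-avoidance rule: treat any contact pressure above a
    # fixed threshold as a threat. All numbers are invented for illustration.
    PRESSURE_THRESHOLD = 20.0  # hypothetical units

    def is_threat(pressure: float) -> bool:
        """Flag any contact above the threshold as potential damage."""
        return pressure > PRESSURE_THRESHOLD

    firm_handshake = 25.0  # a firm grip can exceed the naive threshold
    print(is_threat(firm_handshake))  # True: the handshake is misread as damage

The rule does exactly what it was told to do, which is the problem: it has no intuitive grasp of which contacts merely involve force and which actually constitute damage.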

I posed the argument that there are certain things that we “know” that are not “learned” from experience, and that are also not mere instinct. This is what I understand to be part of what Plato is talking about when he talks about the forms. I’m not an expert in this, so I may be mistaken. The argument was lengthy and met many objections, so I will summarize rather than try to trace the actual direction of the discussion.

My illustration was that when we learn the ad hominem fallacy, for example, what we are learning is only the term–the concept is something that we already know. We had likely encountered ad hominem before, so we knew what it was conceptually, and we also knew that it was wrong. But even if we had never encountered it before, we would still know, on the first encounter, that it was wrong, just as we would recognize that 1 + 1 = 3 is wrong (arbitrary mathematical language and symbols aside). We did not have to learn that–in fact, this prior knowledge is part of how we learn new things at all: if we are presented with an argument that contains an ad hominem, we know to question its conclusion, and therefore we choose what to “learn” from it, if anything. So in “learning” the ad hominem, all we are actually learning is the term for it–like how we learn that the word “blue” denotes the color we experience as blue. Even if we are taught that the color blue is actually called “orange,” or that the ad hominem is actually called the “gobbledygook,” the knowledge of the thing itself is something we already have. So in terms of Plato’s forms, the wrongness of the ad hominem is something we accept as true because we already have an idea of what truth looks like, or is. It fits the “form” of truth as we understand it, and in order to recognize that, we did not have to learn anything; we already knew it. So this theory of knowledge holds that much of what we “learn” is actually just “recovered.”

This all comes into conflict with the notion that we, as human beings, are just bundles of organic matter wired with a set of instincts. We could call our sense of the forms a kind of “instinct,” but that seems to run up against the definition of “instinct” as it is commonly understood, and seems an inadequate explanation. It also runs up against common models of knowledge: first we don’t know something, then we learn it, then we know it. It seems like there are certain types of knowledge–such as that water is composed of hydrogen and oxygen–that conform to these models, but that there are also types of knowledge (or perhaps better called “beliefs”?) that do not–such as that the ad hominem presents a fallacious argument, an unacceptable means for uncovering truth.

The final word on this, aside from the suggestion that AI might never be said to achieve a state comparable to human consciousness because it cannot have access to this sense of the “forms,” was a question: given that there are things we can “know” that are not “learned,” at least not empirically, might this have some bearing on the field of ethics?

Sara seemed distressed, but I think it was a good exercise in considering another worldview, one that possibly makes better sense of things. Sometimes Philosophy Club is unproductive, but this one felt immensely productive. Except for the fact that there were no snacks. And on that count, it was an utter failure.