Addicted to Love (and Apologetics Continued)

We picked up essentially where we left off, launching right into the arguments for the existence of God. Steven had done some outside research, looking into Aquinas' arguments. The one he found most compelling was the first cause argument: everything we observe has a cause, but the chain of causes cannot regress infinitely, so there must be a first cause that is itself uncaused. This led into a discussion of Aristotle's four causes (material, formal, efficient, and final), since many of these arguments rest on Aristotelian metaphysics. Aristotle's efficient cause is closest to what we normally mean by "cause" in our contemporary understanding, but the other three are also used to frame arguments for God. Our understanding of these arguments is necessarily impoverished because we don't subscribe to these metaphysics. I will post some resources on them later, since our discussion mostly consisted of me trying (badly) to remember them without really understanding them sufficiently.

We then discussed the problem of evil: if God is all-good and all-powerful, why is there suffering and evil? This is a common objection to the existence of God (at least a good God), but my assertion was that it doesn't actually function as an argument against the existence of God, since it must assume God's existence in order to be meaningful. It is essentially an "interior" issue within the belief system, not an objection from outside of it.

In any case, it is a problem, and there are a couple of responses. A common response is the free-will argument: God is not ultimately responsible for evil; human beings are. It was good for God to give humans free will, but humans misuse that free will, and therefore evil exists. There are a couple of problems with this response. First, it fails to explain natural evil and suffering: natural disasters, disease, and so on. Second, there is a potential loophole: wasn't it up to God to give people free will in the first place? And in doing so, didn't he know that people would commit evil, and isn't he therefore still ultimately responsible for it? There are some responses to this. Greg Boyd argues a position called "open theism," on which God is said, in a way, not to know in advance what free creatures will do. This view sits awkwardly with much of what we see in scripture and with much theology about God's omniscience and foreknowledge, but it is one potential response.

A second response to the problem of evil is to identify two further problems with the argument itself: it assumes 1) that the allowance of suffering is not good, and 2) that we as humans have a viewpoint comprehensive and objective enough to know that the universe would have been better had it been created without the inclusion of, or even the potential for, evil. Both of these assumptions are contestable. If there is a God, we are not that God, and therefore we have no grounds on which to judge the act of creation, nor grounds on which to say that the inclusion of suffering in its design is ultimately an evil thing. The Bible itself (see the Book of Job) seems to say as much. This response is disconcerting: it does not easily resolve the cognitive and emotional dissonance we feel over this issue. But I find it compelling, and I think it is sufficient as a rebuttal to the argument.
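For reference, here is the argument we were responding to in more formal dress (a standard rendering of what is often called the "logical problem of evil," associated with J. L. Mackie; we did not actually write this out at the meeting). Laying it out this way shows which premises the responses above are contesting:

```latex
\begin{align*}
&(1)\ \text{God is omnipotent.}\\
&(2)\ \text{God is wholly good.}\\
&(\mathrm{A1})\ \text{A wholly good being prevents every evil it can prevent.}\\
&(\mathrm{A2})\ \text{An omnipotent being can prevent any evil.}\\
&\therefore\ \text{No evil exists.}\\
&(3)\ \text{But evil exists; contradiction.}
\end{align*}
```

The free-will response, in effect, denies (A2) as applied to evils that flow from free choices: even omnipotence cannot both grant genuine freedom and guarantee it is never misused. The second response denies (A1): it is not obvious that a wholly good being must prevent every evil, since the allowance of some suffering might itself be good, and we may lack the standpoint to judge otherwise.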

We left the problem of evil there and revisited the topic of artificial intelligence. The question this time was whether artificial intelligence could be said to be conscious, and whether it could be said to act deliberately. Aristotle separates human beings from the animals using this as one of his criteria: human beings are deliberate agents. By that criterion, it seemed to us that artificial intelligence does not qualify. Whatever "deliberation" occurs is predetermined by its programming, and so artificial intelligence is simply following its programming to a predetermined end. The goal of artificial intelligence is ultimately the execution of its code. Human beings have a different sort of end or goal (telos, in Aristotelian language), and seem to have a different level of awareness of the sensory data taken in and analyzed. Artificial intelligence only responds to stimuli; human beings deliberate.
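To make that claim concrete, here is a minimal sketch (hypothetical Python; the rule table and names are invented for the example) of the kind of stimulus-response behavior we had in mind: every output is fixed in advance by rules a programmer wrote, so nothing resembling deliberation occurs.

```python
# A toy stimulus-response "agent": every output is fixed in advance
# by a rule table its programmer wrote. Rules and names are invented
# purely for illustration.

RULES = {
    "obstacle ahead": "turn left",
    "battery low": "seek charger",
    "loud noise": "stop and listen",
}

def respond(stimulus: str) -> str:
    """Look up the pre-programmed response; unknown stimuli get a default."""
    return RULES.get(stimulus, "do nothing")

for stimulus in ["obstacle ahead", "battery low", "something unforeseen"]:
    print(f"{stimulus!r} -> {respond(stimulus)!r}")
```

Even a system that "learns" only pushes the point back a level: the learning rule itself is part of the predetermined program, which is roughly what we meant by saying the AI's telos is the execution of its code.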

The final topic of interest was romantic love, and whether an addiction to romantic love might qualify as a type of disorder, such as might be categorized in the DSM. First we had to delineate what constitutes romantic love versus other types of love (long-term relational love, the love of beauty, or a love of ice cream, for example), and we seemed to settle on "romantic love" as defined by the chemical responses in the brain that create the emotional "high" someone gets at the onset of a new romantic relationship. Then we determined that such a chemical response is not properly called an addiction in itself: the chemicals alone don't necessitate addiction. Addiction is a response to the chemicals, not the chemicals themselves. We all seemed to agree that such an addiction is at least a possibility, though not a reality for many, if not most, people who experience romantic love.

Then we raised the larger question: what merits inclusion in the DSM? We wondered whether the inclusion of something in the DSM as a "disorder" isn't arbitrary. What is the gold standard, the "average" person against whom all of these disorders are weighed? We pointed to the fact that many disorders have the potential to be constructive (such as ADD, properly managed), and that some have been removed (homosexuality used to be classified in the DSM). How is it that we know a certain facet of human behavior is a dis-order? We also pointed to the problematic nature of diagnosis itself. Someone might exhibit the symptoms of a disorder and yet not have that disorder. It seems that in some cases it's the assumption that the person has the disorder that leads to the later diagnosis: confirmation bias. Sara told the story of a man in prison who exhibited the signs of a psychopath and could not escape the diagnosis.

This was a particularly rich discussion, and I'm missing some of the details. We left off with some suggestions of potential topics for next time: substance dualism, and the everlasting problem of morality and ethics, which is almost as big a problem as the fact that there were still no snacks.

The Atomic Bomb, Artificial Intelligence, and the Forms

A small group today (only three of us), but a discussion no less lively than usual. We began by talking about the atomic bomb in the context of an essay prompt for a history class. The question asks: was dropping the atomic bombs on Hiroshima and Nagasaki justified? The arguments in favor seem to rely on speculation: specifically, that had the bombs not been dropped, the body count would have been higher in the event of a more traditional invasion/attack and/or the continuation of the war. Others rely on the assumption that the Japanese would not have responded to a demonstration or a test, and that direct action without prior warning was the only effective way to send the message. The problem with these arguments is that they are wildly probabilistic, and there is no way to really know what "might have happened" in an alternate history. As we discussed last week, these sorts of projections and predictions are seldom justifiable. The unfortunate fact, though, is that these sorts of decisions have to be made. The arguments against dropping the bomb appeal more to morality: absolute standards by which a country should or should not act in time of war. These standards, however, seem arbitrary. Who's to say which lives are more valuable than others? (Are American lives more valuable than Japanese lives? Are civilian lives more important than soldiers'?) One could also say that warfare in general is immoral, so that anything done in the context of warfare is automatically immoral to begin with, in which case we can say nothing distinctive either way about the dropping of the atomic bomb.

As for possible approaches to a paper, I suggested perhaps beginning with a strong "if": if a certain premise or set of premises is true, then certain conclusions necessarily follow with regard to morality, prediction, or whatever else. But in the end, it all depends on our acceptance of the initial premise or set of premises. Alternatively, one might begin by putting the burden of proof on a certain party, and then show how that party has failed to prove its position.

We discussed the nature of debate and argument, noting that some people seem less interested in having a dialogue than in merely having a forum to express their own opinions. We can all be violent, aggressive defenders of our own viewpoints, failing to respond reasonably to someone else's argument. The problem, for me, arises when a person does not really present an argument, nor seem interested in listening to one. We must always remember to be civil with people, even when they disagree with us, and even when they fail to go about argument and debate in the way we think they should.

From there, the conversation turned to artificial intelligence. Apparently, Google has created AI that can hold conversations: conversations that are mostly nonsensical, but that sometimes stumble onto something philosophical, such as what constitutes "art," or whether one of them is, in fact, a human. We discussed whether or not AI could be said to be "conscious," which of course depends on what our definition of consciousness is. Is something conscious if there is something it is like to be that thing? (See Thomas Nagel's "What Is It Like to Be a Bat?")

I posed the question: given the proper amount of time and technological advancement, could an artificial intelligence ever be said to constitute a human consciousness? This raised another question: could an artificial intelligence, given time, learn everything, and therefore know everything, that a human knows? What is it to learn something, and what is it to know something?

From here, the conversation took several convoluted turns, involving an illustration featuring an adopted child living alone in a room painted blue, introduced to another human being (me) for the first time at the age of fifteen. Obviously such experiments are impractical to actually perform, but even as a thought experiment it was difficult to wrap our minds around. How much would this child "know," beyond what we might call mere instinct? An AI can be taught to avoid damage to itself, but how does it know what might constitute damage, given the myriad things it could encounter? Might it mistakenly take a firm handshake as a portent of damage? A human being, by contrast, already intuitively understands what sorts of things constitute damage and what don't (even when they involve injury).
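To illustrate that worry in code (a toy sketch; the threshold, feature, and readings are all invented for the example): any damage-avoidance rule has to be written in terms of measurable proxies, and the proxies can misfire on cases no human would confuse.

```python
# Toy illustration: a damage-avoidance rule written in terms of a
# measurable proxy (pressure on the machine's surface). The threshold
# and the event readings are invented for the example.

DAMAGE_PRESSURE_THRESHOLD = 50.0  # arbitrary units

def looks_like_damage(pressure: float) -> bool:
    # The rule sees only the proxy, not the meaning of the event.
    return pressure > DAMAGE_PRESSURE_THRESHOLD

events = {
    "falling rock": 300.0,
    "firm handshake": 80.0,  # misfires: high pressure, but harmless
    "gentle bump": 20.0,
}

for name, pressure in events.items():
    verdict = "avoid!" if looks_like_damage(pressure) else "ignore"
    print(f"{name}: {verdict}")
```

The handshake gets flagged along with the rock, because the rule has no access to what the events mean; the human, on our account, brings that understanding with them.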

I posed the argument that there are certain things we "know" that are not "learned" from experience, and that are also not mere instinct. This is what I understand to be part of what Plato is talking about when he talks about the forms. I'm not an expert in this, so I may be mistaken. The argument was lengthy and met many objections, so I will summarize rather than try to trace the actual direction of the discussion.

My illustration was that, for example, when we learn the ad hominem fallacy, what we are learning is only the term; the concept is something we already know. We had likely encountered ad hominem before, so we knew what it was conceptually, and we also knew that it was wrong. But even if we had never encountered it before, we would still know, on the first encounter, that it was wrong, just as we would recognize that 1 + 1 = 3 is wrong (arbitrary mathematical language and symbols aside). We did not have to learn that; in fact, this is part of the framework by which we learn new things. If we are presented with an argument that contains an ad hominem, we know to question its conclusion, and therefore we choose what to "learn" from it, if anything. So in "learning" ad hominem, all we are actually learning is the term for it, much as we learn that the word "blue" denotes the color we experience as blue. Even if we are taught that the color blue is actually called "orange," or that the ad hominem is actually called the "gobbledygook," the knowledge of the thing in itself is something we already have. So in terms of Plato's forms, the wrongness of the ad hominem is something we accept as true because we already have an idea of what truth looks like, or is. It fits the "form" of truth as we understand it, and in order to recognize that, we did not have to learn anything. We already knew it. So this theory of knowledge holds that much of what we "learn" is actually just recovered (Plato calls this recollection, or anamnesis).

This all comes into conflict with the notion that we, as human beings, are just bundles of organic matter wired with a set of instincts. We could call our sense of the forms a kind of "instinct," but that runs up against the definition of "instinct" as it is commonly understood, and seems an inadequate explanation. It also runs up against common models of knowledge: we don't know something, then we learn it, then we know it. It seems that there are certain types of knowledge, such as that water is composed of hydrogen and oxygen, that conform to these models, but that there are also types of knowledge (or perhaps better, "beliefs"?) that do not, such as that the ad hominem presents a fallacious argument, an unacceptable means of uncovering truth.

The final word on this, beyond the suggestion that AI might never be said to achieve a state comparable to human consciousness because it cannot have access to this sense of the "forms," was a question: given that there are things we can "know" that are not "learned," at least not empirically, might this have some bearing on the field of ethics?

Sara seemed distressed, but I think it was a good exercise in considering another worldview, one that possibly makes better sense of things. Sometimes Philosophy Club is unproductive, but this one felt immensely productive. Except for the fact that there were no snacks. And on that count, it was an utter failure.