The Atomic Bomb, Artificial Intelligence, and the Forms

A small group today (only three of us), but a discussion no less lively than usual. We began by talking about the atomic bomb in the context of an essay prompt for a history class. The question asks: was dropping the atomic bomb on Hiroshima and Nagasaki justified? The arguments in favor seem to rely on speculation–specifically, that, had the bomb not been dropped, the body count would have been higher in the event of a more traditional invasion or a continuation of the war. Others rely on the assumption that the Japanese would not have responded to a demonstration or a test, and that direct action without prior warning was the only effective way to send the message. The problem with these arguments is that they are wildly probabilistic; there is no way to really know what “might have happened” in an alternate history. As we discussed last week, these sorts of projections and predictions are seldom justifiable. The unfortunate fact, though, is that these sorts of decisions have to be made. The arguments against dropping the bomb appeal more to morality–absolute standards by which a country should or should not act in time of war. These, however, seem arbitrary. Who’s to say which lives are more valuable than others? (Are American lives more valuable than Japanese lives? Are civilian lives more important than soldiers’?) One could say that warfare in general is immoral, but then anything done in the context of warfare is automatically immoral to begin with, and we can say nothing either way about the dropping of the atomic bomb.

As to possible approaches to a paper, I suggested perhaps beginning with a strong “if”: if a certain premise or set of premises is true, then certain conclusions will necessarily follow, with regard to morality, prediction, or whatever. But in the end, it all depends on our acceptance of the initial premises. Or, one might begin by placing the burden of proof on a certain party, and then show how that party has failed to prove its standpoint.

We discussed the nature of debate and argument, noting that some people seem less interested in having a dialogue than merely having a forum to express their own opinion. We can all be violent, aggressive defenders of our own viewpoints, failing to respond reasonably to someone else’s argument. The problem, for me, arises when that person does not really present an argument, nor seem interested in listening to one. We must always remember to be civil with people, even if they disagree with us, and even if they fail to go about argument and debate in a way that we think they should.

From there, the conversation turned to artificial intelligence. Apparently, Google has created AIs that can hold conversations with each other–conversations that are mostly nonsensical, but that sometimes stumble onto something philosophical, such as what constitutes “art,” or whether one of them is, in fact, a human. We discussed whether or not an AI could be said to be “conscious,” which of course depends on what our definition of consciousness is. Is something conscious if there is something it is like to be that thing? (See Nagel’s “What Is It Like to Be a Bat?”)

I posed the question: given the proper amount of time and technological advancement, could an artificial intelligence ever be said to constitute a human consciousness? This raised another question: could an artificial intelligence, given time, learn everything, and therefore know everything, that a human knows? What is it to learn something, and what is it to know something?

From here, the conversation took several convoluted turns, involving an illustration featuring an adopted child living alone in a room painted blue, introduced to another human being (me) for the first time at the age of fifteen. Obviously such experiments are impractical to actually perform, but even as a thought experiment it was difficult to wrap our minds around. How much would this child “know,” beyond what we might call mere instinct? An AI can be taught to avoid damage to itself, but how does it know what might constitute damage, given the myriad things it could encounter? Might it mistakenly take a firm handshake as a portent of damage? A human being, by contrast, already intuitively understands what sorts of things constitute damage and what sorts don’t (even if they involve injury).

I posed the argument that there are certain things that we “know” that are not “learned” from experience, and that are not mere instinct either. This is what I understand to be part of what Plato means when he talks about the forms. (I’m not an expert in this, so I may be mistaken.) The argument was lengthy and met many objections, so I will summarize rather than try to trace the actual direction of the discussion.

My illustration was that when we learn the ad hominem fallacy of argument, what we are learning is only the term–the concept is something that we already know. We had likely encountered ad hominem before, so we knew what it was conceptually, and we also knew that it was wrong. But even if we had never encountered it before, we would still know, on the first encounter, that it was wrong, just as we would recognize that 1 + 1 = 3 is wrong (arbitrary mathematical language and symbols aside). We did not have to learn that–in fact, this is part of the process by which we learn new things: if we are presented with an argument that contains an ad hominem, we know to question its conclusion, and therefore we choose what to “learn” from it, if anything. So in “learning” ad hominem, all we are actually learning is the term for it–much as we learn that the word “blue” denotes the color that we experience as blue. Even if we were taught that the color blue is actually called “orange,” or that the ad hominem is actually called the “gobbledygook,” the knowledge of the thing itself is something that we already have. So in terms of Plato’s forms, we recognize the ad hominem as fallacious because we already have an idea of what truth looks like, or is. It fails to fit the “form” of truth as we understand it, and in order to recognize that, we did not have to learn anything; we already knew it. On this theory of knowledge, much of what we “learn” is actually just “recovered.”

This all comes into conflict with the notion that we, as human beings, are just bundles of organic matter wired with a set of instincts. We could call our sense of the forms a kind of “instinct,” but that seems to run up against the definition of “instinct” as it is commonly understood, and seems an inadequate explanation. It also runs up against common models of knowledge: first we don’t know something, then we learn it, then we know it. There seem to be certain types of knowledge–such as that water is composed of hydrogen and oxygen–that conform to these models, but also types of knowledge (or perhaps better called “beliefs”?) that do not–such as that the ad hominem presents a fallacious argument, an unacceptable means for uncovering truth.

The final word on this–other than the suggestion that AI might never achieve a state comparable to human consciousness because it cannot have access to this sense of the “forms”–was a question: given that there are things we can “know” that are not “learned,” at least not empirically, might this have some bearing on the field of ethics?

Sara seemed distressed, but I think it was a good exercise in considering another worldview, one that possibly makes better sense of things. Sometimes Philosophy Club is unproductive, but this one felt immensely productive. Except for the fact that there were no snacks. And on that count, it was an utter failure.


Of Human Flourishing and Human Sacrifice

Seven of us total today, plus a guest guide dog in training. So naturally the conversation began with the ethics of animal abuse, and it took a familiar trajectory: after we considered the reasons why abuse is wrong–it causes pain, dogs are sentient, etc.–each reason was countered by some hypothetical situation. What if the animal cannot feel pain? What difference does sentience make?

The question arose whether consent makes something ethical when it otherwise wouldn’t be. For instance, human sacrifice: if a person consents to being a human sacrifice, does that make it permissible for those conducting the ritual to sacrifice that person? Answers went along the lines of: as long as those who are affected also consent. Those who are affected might include parents, friends, etc. But then that raised the question of the butterfly effect–these effects spread from person to person like ripples in a pond; it is impossible to know who might be affected and how. It is also impossible to know what the world might have been like had the person who consented to ritual sacrifice not been sacrificed. Might they have grown up to be an influential person? (Might they also have grown up to become Hitler?) There are similar arguments against abortion.

This in turn raised the question of to what extent we can use our powers of predicting the future as grounds for making any decision at all. My purpose in raising it was to try to steer us in a more existential direction–i.e., Sartre’s claim that we are enslaved to freedom: while we have no idea of the ramifications of the choices we make, we must make choices, and we are responsible for the choices that we make.

I can’t remember the exact transition, but this got us talking about human flourishing–the question being, how can we know that the decisions we make will promote human flourishing? Aramis made the claim–and here I am going from memory and trying not to mangle his intended argument–that we as people can look at history and discern how best to promote human flourishing. I took issue with this on a number of points, and here the conversation took several convoluted turns, so I will attempt to summarize the conclusions rather than follow the conversation as it happened. One primary objection was that human flourishing is a relative term: different people at different times and places have different definitions of it. Granted, there seem to be common threads throughout history, but nothing conclusive enough to form a concrete definition. Even basic claims like “death = bad for human flourishing” and “happiness = good for human flourishing” met with potential objections. (The samurai’s seppuku was seen as beneficial on the whole in terms of justice, and some versions of radical Islam place value on suicide; and happiness is relative–some people actually find sadness and depression to be a comfort, or find joy in cruelty.) The second major objection was that interpreting history is not an objective exercise–the facts do not present themselves in such a way that we can automatically construct a definition of what human flourishing is, or determine a means for achieving it. My argument was that to interpret history, one must necessarily start with some presuppositions. For example, one person may look at the story of Hitler and the Holocaust and think, “What an atrocity–millions of lives lost,” while another may look at the same event and think, “How close we came to achieving true human flourishing, if only Hitler had not made such grievous tactical mistakes.” We seemed to agree that determining which parts of history indicate human flourishing and which don’t depends on something other than the mere facts of history themselves. What human flourishing is is not self-evident.

As usual, nothing was solved, and the question still remains: is there a definition of human flourishing, and is there some means for achieving it? How are we to know, given that we can’t predict the future, and that we can’t interpret the past without already having some idea of what we’re looking for?

We mused that, in these meetings, there always seems to be one person making claims and arguments while the rest play the role of the skeptic, shooting them down. Why is it so much easier to be skeptical than to make a positive argument? Philosophy Club is not for the thin-skinned. I am grateful for members like Aramis who are up for playing hardball.

The final issue raised was evolution, which we decided to save for another time. We also talked briefly and informally about the nature of Philosophy Club in general–what should and shouldn’t be discussed during the meetings. Last year, some of the meetings seemed to turn into group therapy sessions. We agreed that, while group therapy is not without its uses, it is probably not what Philosophy Club is or should be about.

A lively meeting, though ultimately unproductive. (But aren’t they all?)