The Other Minds

My dog loves me. Despite his creaking hips and back, he heaves himself up and comes to greet me when I return home each night, with his tail wagging. Yet I wonder if I am right about his feelings about me. After all, I am just interpreting his behavior as representative of mental and emotional states which I would have in similar circumstances. And, he has been bred over centuries to be a veritable human-pleasing machine which exhibits a set of behaviors that, among other things, is calculated to make me feel that he feels like I am the best thing since kibbles. Come to think of it, he does not wag his tail while he eats, and he never met a kibble he didn’t love.

If only he could tell me that he loves me, then I would know for sure. On second thought, I could not know for sure. I can’t even know for sure when another human reports their feelings or perceptions or any other personal, qualitative aspect of their experience to me. In any such case, the experience that I attribute to their report may be radically different from what they are actually experiencing. At least, that’s what the Inverted Spectrum teaches us.

The Inverted Spectrum is a thought experiment. It was not devised to tackle the problem of other minds. It was devised to demonstrate the ethereal nature of qualitative properties. But like any good thought experiment, it illustrates multiple aspects of the target issue.

Here’s how it goes: Imagine that you have a best friend named Fred, whom you have known since you both could walk. Unbeknownst to you however, whenever you both look at something red, Fred does not see red, he sees green instead. This is not to say that Fred is color blind. On the contrary, he sees all the colors that you see, and he quite happily calls the red object “red”. He just sees it as green. The two of you could go through your entire lives discussing painting and picking out Granny Smiths instead of Red Delicious at the grocery store, without a hitch. The basic qualities “red” and “green” do not influence function; we happily operate the same way with the qualities flipped.

The implications of the Inverted Spectrum may seem bizarre, dramatic and disturbing, but closer examination may shrink the menace. If I assign you and Fred to sort red and green beads into separate boxes, the two of you will complete the task in no time with no mistakes. That’s because what we all call “red” designates the same set of beads, even though they produce in Fred what you or I would call a “green” experience. To take it a little further, if I assign the two of you to tell me the color of sour things, sweet things, hot things, dangerous things or growing things, you and Fred will give me the same answers in French, English, Fulani, or even just by pointing. All secondary associations are flipped along with the reds and greens.
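The point about flipped secondary associations can be made almost mechanically. Here is a minimal sketch (the stimulus names, qualia labels, and label maps are all hypothetical stand-ins, not anyone's theory of color vision) showing that when an agent's inner states are inverted along with the verbal labels learned against them, no behavioral test distinguishes the two agents:

```python
# Two agents receive the same stimuli but have inverted inner "qualia".
# Because Fred's verbal labels were learned against his own (inverted)
# qualia, the inversion cancels out in everything he says and does.

STIMULI = {"apple": "red_light", "grass": "green_light"}  # hypothetical stand-ins

def normal_quale(stimulus):
    # Your inner experience of each stimulus.
    return {"red_light": "RED", "green_light": "GREEN"}[stimulus]

def inverted_quale(stimulus):
    # Fred's inner experience: systematically flipped.
    return {"red_light": "GREEN", "green_light": "RED"}[stimulus]

def report(quale, inverted=False):
    # Each agent's public label map is learned against their own qualia,
    # so Fred's map is flipped too -- and the two flips cancel.
    if inverted:
        return {"GREEN": "red", "RED": "green"}[quale]
    return {"RED": "red", "GREEN": "green"}[quale]

for obj, stim in STIMULI.items():
    you = report(normal_quale(stim))
    fred = report(inverted_quale(stim), inverted=True)
    assert you == fred  # identical behavior, different inner states
```

The bead-sorting task from the paragraph above is just this loop: every public output matches, which is why the inversion is undetectable from the outside.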

The jolt from this thought experiment comes when we imagine our experience of Fred’s experience, with all of our secondary associations still in place. But that’s completely off base. What we have run down with this thought experiment is an account of Fred’s experience with all his own secondary associations attached. The point is that there is some irreducible personal element to it all. But then, where does that leave Fred’s “red” or his “green” or his any other what-it-is-like aspect of experience?

Having seen, from your own viewpoint, what it is like to experience what Fred sees, you may have trouble explaining your horror to him. You will insist that the apple is red, as are hot things and dangerous things, and he will heartily agree. You can desperately insist that he is deluded and is pervasively mistaking red qualities for green ones. He will reply that he is not and will ask you to prove it, which, as the thought experiment demonstrates, you cannot. What remains to his personal, qualitative experience, stripped of all the secondary associations, is just its personalness.

If you were to truly step into Fred’s skin with all its secondary associations in place and your own secondary associations set aside, you would have to admit that Fred’s “red” is indeed red; it is just not your red.

My dog may be an automaton. He may be a human-pleasing machine who wags his tail on the basis of a genetic algorithm and just acts in a very convincing way, like he means it. But if so, as the Inverted Spectrum illustrates, he does mean it, just as Fred really means red when he says “red”. All the secondary associations are in place. I may rightly conjecture that what it may be like to be him may not be what it is like to be me, but I knew that before he wagged his tail. He loves me, as sure as I know what love is.

A Few, Very Specific Things

People are always asking me, “What do you believe?”

Nah, nobody actually asks me that, but I tell them anyway, just like this:

When it comes to consciousness, it comes with identity, and therefore locality.

When it comes to cause, it causes identity.

And, of course, that’s about it for classical theism.

AI, Determinism and Supervenience

Recently, NPR interviewed the man who “de-aged” Robert De Niro for the film “The Irishman”. The conversation drifted into the metaphysics of computer-generated imaging:

GARCIA-NAVARRO: And what you’re describing is a technology that’s only going to get better and better, which I think brings up some ethical issues because as the technology gets more seamless and commonplace and those likenesses that you’ve just described get more subtle, could we end up doing away with the actual actor altogether? I mean, could it come to a point where a studio owns the digital image of an actor and just uses that instead of the real thing?

HELMAN: I don’t think so ’cause the performance has to come from somewhere, and that has to be the actor. And so just think about what it’ll take for a computer to do what Robert De Niro does. You need to train the computer – right? – to do those kinds of things. And basically, if you think about the behavior or likeness of somebody, how do you become yourself? You become yourself by living, you know, by having a bunch of experiences. And then you also have all the connections that are made in your face, the way you smile, all the cultural things that you live.

So if you want a computer to act like Robert De Niro, you need to train the computer like Robert De Niro. And then you spend a lifetime, you know, basically training the computer. And for that, you might as well just use Robert De Niro, you know?

Quite right, and more generally correct than I suspect Mr. Helman intended. Explanations pertain to the past, where things could not have been otherwise. The future will always be the realm of probability; otherwise, to echo Mr. Helman, it would already have been.

The supervenient identity specifies its base. Otherwise, there is no explanation of the identity by the base. The point is: there are no loose ends – no theoretical De Niros.
