Searle’s Chinese room

by look i have opinions

There are so many problems with this argument. I was going to write about how wrong I think it is, but the more I think about it, the more I think the real problem is not that it’s wrong but that it’s muddle-headed. No wonder Searle claims to have gotten such divergent responses from different people. If the argument were clear, then different people would tend to focus on the same points.

From the start, Searle’s thought experiment muddles the issue by confounding concepts that ought to be kept distinct:

  • He treats the understanding of a Chinese text as the equivalent of understanding Chinese. A moment’s thought makes it obvious that either can exist without the other.
  • He treats understanding as a proxy for consciousness itself. This makes some sense, since consciousness is a prerequisite of understanding, but Searle goes on to imply that where there is no understanding, there is no consciousness—which is nonsense.
  • He assumes that a non-conscious machine trying to use a human language is analogous to a human trying to use a foreign language. This strikes me as both dubious and way more convoluted than necessary.

Most significantly, Searle says, “One way to test any theory of the mind is to ask oneself what it would be like if my mind actually worked on the principles that the theory says all minds work on” (emphasis added), and immediately after that, he identifies himself with the non-Chinese-speaking human locked in the room, rather than with the room as a whole. Which means Searle intends to model the workings of his own mind as though those workings took place primarily outside of his own consciousness. Well, to be fair, he’s not modeling the way he thinks his mind works at all; he’s modeling the way “strong AI” proponents think the mind works.

But by imagining a mostly redundant human processor surrounded by loads of dull paperwork, Searle quietly assumes the very thing he intends to demonstrate: that the program’s model of the mind is really a model of something superficial, not something that delves deep into the ways people think.

I can think of two ways to clear up the above confusions.

1. One way is to restrict the discussion to a much narrower question. What does it mean for a non-Chinese speaker to understand a Chinese text? If Searle stays in his Chinese room for a long time and studies his instructions thoroughly, will he come to understand the characters he’s reading?

Well, he won’t learn that ideogram #1 refers to a man and ideogram #2 refers to a restaurant, which is what we normally mean by understanding Chinese. But he will learn something significant about how #1 gets used in relation to #2. If he later receives a Chinese-English dictionary, he may feel a déjà vu-ish thrill of recognition, realizing that a certain set of symbols he’s been using (the restaurant script) describes exactly how people behave in restaurants. So yes, Searle can learn to understand what he’s reading in a very limited way.

But can we assume that Searle’s ability to understand is an accurate metric of how well the instructions model understanding? I don’t know. Even if we did make that assumption, it wouldn’t have much bearing on Searle’s real point, which is that a model of understanding is not sufficient for actual understanding. So this thought experiment doesn’t get us very far.

2. The second way is to remove the human processor altogether, replacing him with a machine processor. And why not? The room’s Chinese output is the result of rigidly followed rules. No human consciousness is needed.

But this version of the experiment is not a thought experiment at all; it’s a real one, and the “room” is merely a rules-based AI program running on a computer. So now we’re back where we started. Does this AI provide an accurate model of an aspect of consciousness? Is it conscious? Some say yes to one or both of these questions; Searle says no.
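To be concrete about what “a rules-based AI program” amounts to here, below is a minimal sketch of my own, far cruder than the Schank-style script programs Searle actually discusses; the rule table and the “squiggle”/“squoggle” tokens are invented placeholders. The point is only that pure pattern-matching on uninterpreted symbols is all the room requires:

```python
# A toy "room" as a program: nothing but formal symbol manipulation.
# The rule table and the symbols are invented for illustration; the real
# script-based programs Searle discusses were far more elaborate.

RULES = {
    # incoming symbol string -> prescribed outgoing symbol string
    "squiggle squoggle": "squoggle squiggle squiggle",
    "squoggle": "squiggle",
}

def respond(message: str) -> str:
    """Match the input against the rule book and return whatever the
    rules prescribe. Nothing here knows, or needs to know, what any
    symbol refers to."""
    return RULES.get(message, "squiggle")  # a default scribble when no rule fits

if __name__ == "__main__":
    print(respond("squiggle squoggle"))  # -> "squoggle squiggle squiggle"
```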

~~~

To make matters worse, Searle makes one more confusing assumption. He is willing, for the sake of argument, to pretend that a rules-based program can produce “outputs that are indistinguishable from those of the native Chinese speaker.” In real life, the programs he was talking about could only discuss a limited set of topics in a limited set of ways.

Searle wants to argue that intelligent-seeming behavior does not indicate real understanding; therefore, he makes the hypothetical program’s behavior sound as intelligent as possible, so that his argument will have the widest possible application. Rhetorically, this is a natural move, but logically, it’s confusing. Does Searle really think a non-conscious program could potentially be as clever and eloquent as a person? Probably not. Even if he did believe that, the assumption isn’t necessary to his argument.

~~~

In essence, Searle is making a reductio ad absurdum argument. He wants us to picture the juggling of rules and syntax, in all its dry mechanicality, and laugh at the idea of such a system ever producing consciousness, just as we laugh at Weizenbaum’s “computer” made from “a roll of toilet paper and a pile of small stones.”

But a similar reductio ad absurdum could be made about a human brain, a skull-shaped room full of neurons that somehow produces fluent Chinese conversation. Searle doesn’t want to make the Chinese skull argument. He doesn’t think the brain can be reduced that way, and he explicitly says as much towards the end of the essay.

And yet, strangely, Searle thinks that a brain simulated by a program can be reduced to absurdity as easily as any other program. In the “Brain Simulator Reply” section, he finally gets to his main point: “As long as [the program] simulates only the formal structure of the sequence of neuron firings at the synapses, it won’t have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states.” This is the only distinction Searle makes between thinking machines (such as brains) and mere programs.
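For what it’s worth, “simulating the formal structure of the sequence of neuron firings” isn’t an exotic idea. Here’s a minimal sketch of such a simulator, my own toy rather than anything proposed in the Brain Simulator Reply, with made-up wiring and thresholds; it records which units fire at each step and nothing else, which is Searle’s point about what the simulation leaves out:

```python
# A toy simulation of "the formal structure of the sequence of neuron
# firings": it tracks only which units fire at which step. The wiring,
# weights, and threshold are all invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
N_UNITS, STEPS, THRESHOLD = 8, 5, 0.5

weights = rng.normal(scale=0.6, size=(N_UNITS, N_UNITS))   # stand-ins for synapses
state = (rng.random(N_UNITS) > 0.5).astype(float)          # initial firing pattern

for t in range(STEPS):
    # A unit fires iff its weighted input from the previous step crosses the threshold.
    state = (weights @ state > THRESHOLD).astype(float)
    print(f"step {t}: firing units = {np.flatnonzero(state).tolist()}")
```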

What makes Searle think a brain has “causal properties” that cannot belong to a formal structure? If by causal properties he means physical powers of causation, then I see no reason why a human in a locked room, or a computer running a program, can’t have them. Searle never explains this assertion to my satisfaction. I consider it a statement of faith.

~~~

Searle deserves some slack because honestly, it’s very hard to talk clearly about these issues. Here’s something he says that I like a lot:

The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn’t be just computational processes and their output because the computational processes and their output can exist without the cognitive state.

Searle is right that we’re attributing something more than just the “processes” and “output” of consciousness. But I think he’s wrong that either of those things can exist without a cognitive state. We can imagine them existing without it, of course; we can just as easily imagine molecules vibrating rapidly without heat. Both those thought experiments prove nothing.

Can consciousness be created or accurately modeled using only high-level symbols like words and concepts? Searle says no; that sort of reverse engineering is too crude to give us anything but a very simplified picture of how consciousness works. I’m pretty sure he’s right about that. But that conviction comes more from intuition than from anything in his essay.

I realize there’s absolutely no call for another takedown of the Chinese room argument, by the way. I’m not even a computer scientist or a rhetorician or anything. I just needed to get this off my chest. Thank you.
