the chinese room


Comments

  • From Searle's Scientific American article, "Is the Brain's Mind a Computer Program?" (1990):

    "Consider a language you don't understand. In my case I do not understand Chinese. To me, Chinese writing looks like so many meaningless squiggles. Now suppose I am placed in a room containing baskets full of Chinese symbols. Suppose also that I am given a rule book in English for matching Chinese symbols with other Chinese symbols. The rules identify the symbols entirely by their shapes and do not require that I understand any of them. The rules might say such things as "take a squiggle-squiggle sign from basket number one and put it next to a squoggle-squoggle sign from basket number two."

    Imagine that people outside the room who understand Chinese hand in small bunches of symbols and that in response I manipulate the symbols according to the rule book and hand back more small bunches of symbols. Now the rule book is the "computer program". The people who wrote it are the "programmers", and I am the "computer". The baskets full of symbols are the "data base", the small bunches that are handed in to me are "questions" and the bunches I then hand out are "answers".

    Now suppose that the rule book is written in such a way that my "answers" to the "questions" are indistinguishable from those of a native Chinese speaker. For example, the people outside might hand me some symbols that, unknown to me, mean "What is your favorite color?" and I might, after going through the rules, give back symbols that, also unknown to me, mean "My favorite color is blue, but I also like green a lot." I satisfy the Turing test for understanding Chinese. All the same, I am totally ignorant of Chinese. And there is no way I could come to understand Chinese in the system as described, since there is no way that I can learn the meanings of any of the symbols. Like a computer, I manipulate symbols but I attach no meaning to the symbols.

    The point of the thought experiment is this: if I do not understand Chinese solely on the basis of running a computer program for understanding Chinese, then neither does any other digital computer solely on that basis. Digital computers merely manipulate formal symbols according to rules in the program.

    What goes for Chinese goes for other forms of cognition as well. Just manipulating the symbols is not by itself enough to guarantee cognition, perception, understanding, thinking and so forth. And since computers, qua computers, are symbol-manipulating devices, merely running the computer program is not enough to guarantee cognition.

    This simple argument is decisive against the claims of strong AI."

    Lakoff and Johnson's (condensed) analysis: "Metaphors included in the thought experiment: 'I am the computer,' 'The rule book is the computer program.' But specifically excluded is the metaphor 'My understanding of the rule book is the computer's understanding of the program.' Hence, Searle does not conclude that the computer understands the program, just as he understands the rule book. Nor does he conclude that the computer understands that it is manipulating formal symbols, just as he understands that he is manipulating Chinese symbols. Searle has carefully constructed the Chinese Room metaphor so that none of the understanding is mapped onto the computer. All that is attributed to the computer is a lack of understanding.

    None of the following unconscious metaphors are mapped onto Searle's made-up thought experiment, though they are its foundation: the person in the Chinese Room understands English; understands the rule book; understands that he is in a room and manipulating objects; understands that he does not understand the symbols."

    See pp. 261-265 of "Philosophy in the Flesh".

    March 21, 2007
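
    The rule-book procedure Searle describes above is, mechanically, table-driven shape matching, and it can be rendered in a few lines of Python. The sketch below is an editorial illustration only: the rule table and symbol strings are invented, and a real system would need vastly more rules. It is offered as one concrete reading of the quote, not as Searle's own formulation.

        # A minimal sketch of the rule book as a lookup table. The operator
        # matches shapes to shapes and never consults a meaning.
        RULE_BOOK = {
            # input shape -> output shape (entries invented for illustration)
            "squiggle-squiggle": "squoggle-squoggle",
            "你最喜欢什么颜色？": "我最喜欢蓝色，不过我也很喜欢绿色。",
        }

        def operator(symbols_handed_in: str) -> str:
            """Follow the rule book mechanically: find the matching shape,
            hand back the paired shape. No dictionary, no semantics."""
            return RULE_BOOK.get(symbols_handed_in, "")

        # From outside, the room answers "What is your favorite color?"
        # fluently; inside, nothing but a match and a copy took place.
        print(operator("你最喜欢什么颜色？"))

    On this rendering, the later objections in the thread are easy to state: Lakoff and Johnson's point concerns which parts of the scenario get mapped onto the computer, and Hofstadter's is that no table remotely like this one could actually reach native-speaker proficiency.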

  • Wow, this is fascinating. More than I intended to read in a Wordie comment, mind you, but compelling enough that I kept going.

    I've thought quite a bit about this before, actually, the idea that a computer could both input and output information reasonably while oblivious to the meaning of the data. Specifically in relation to Asimov's ideas about the eventual need for "robot rights." Seems to me that a droid may someday appear entirely human to all observers, pass the Turing Test with flying colors, yet have no true understanding of anything. It's simply become really darn good at mimicking human behavior and parsing stimuli. What would you do if a robot told you it was experiencing pain when you disassembled it, even to the point of yelping convincingly? There's no reason to believe it's true, as the robot has no consciousness or soul, and yet it's very hard to proceed. Out of this emotional response from humans will come the perceived need for robot rights, though there will still be no more to robots than circuits and joints and sensors.

    I recently read an intriguing article about the uncanny valley. It suggested that as robots climb out of that valley of near-human eeriness, there will come a point where we stop identifying them as machines and think of them as equal to ourselves, even though they have not changed fundamentally at all. That would be a purely emotional response resting on a logical fallacy. If we see a robot-rights crusade in our lifetimes, I will be firmly opposed to it, for this reason.

    Ha ha, just trying to see if I could write a comment as long as yours. ;-) But at the same time, I'm thankful you've equipped me with the Chinese Room thought experiment. I've never been able to explain this concept to other people (believe it or not, I've tried) but a good metaphor ought to drive the point home.

    March 22, 2007

  • Dave: We're closed.

    Scott: Hello? I want you to tell me where a shoe store is because I want to look for a pair of shoes and buy 'em.

    Dave: I'm sorry. I'd love to be of assistance to you but I'm afraid I speak no English.

    Scott: Pardon?

    Dave: Ah. I see by the expression on your face that you are confused by my statement. Perhaps you doubt its veracity, but let me assure you, I speak not a word of English.

    Scott: What are you talking about, huh?

    Dave: You see, everything that I am saying to you I have learned to speak phonetically. As to the meanings of the individual words or the percumbant rules of syntax, I haven't a clue.

    Scott: Why don't you just shut up and tell me where the shoe store is, you jerk?

    Dave: Allow me to reiterate, I speak no English. Perhaps this will wash the confusion from your face, my friend. My apparent fluency is the result of constant repetition. As you can imagine, I have been through this speech many times before; in fact, I could repeat it for you in any one of seven different languages. Yet oddly enough, I've never learned to speak it in my own, which is fine, since over the years I have forgotten how to speak my own language.

    Scott: Just shut up and tell me where the shoe store is, huh?

    Dave: Thank you, would you like to fight me now or are you a coward?

    Scott punches Dave in the stomach.

    Scott: Don't die.

    Dave: I don't know what you're saying.

    Scott: I just wanted to buy a pair of shoes, huh?

    Dave: No hablo español, señor.

    Scott: Just got feet, don't got shoes.

    Dave: Nein sprechen Sie Deutsch.

    March 22, 2007

  • Interesting. Two weeks ago I had never heard of the Chinese Room; all of a sudden it's showing up everywhere. I'm currently reading Douglas Hofstadter's "Le Ton Beau de Marot", in which he comments on Searle's gedankenexperiment at some length. He is fairly dismissive, referring to it as a "bad meme", one that might seem sensible on superficial inspection, but "the more carefully you scan it, the sillier it gets".

    Among the objections raised by Hofstadter: Searle is disingenuous to the point of being misleading about certain key aspects of the system, specifically how resource-intensive any program would need to be to attain the stated level of proficiency, and how glacial the time-scale would probably be. But the most telling objection is "the suggestion (not explicitly articulated yet clearly implied) that the rules allowing the system to answer questions at native-speaker level can be couched merely in terms of manipulation of the symbols of the language", without requiring an understanding of "abstract patterns transcending the signs of Chinese, or of any human language - patterns behind the scenes, having to do with how ideas and concepts and knowledge are represented in complex, overlapping associative networks in the human mind".

    If I'm understanding any of this, and I'm not sure that I do, the problem seems to be that there's a whole meta-level that Searle just tries to sweep under the carpet. (The scale point, at least, can be given rough numbers; see the sketch after this comment.)

    Personally, I feel a little more affinity for the mathematicians' Chinese Box and Chinese Remainder theorems.

    March 22, 2007
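
    Hofstadter's resource objection, quoted above, can be given rough numbers. The following back-of-the-envelope sketch is an editorial illustration under invented assumptions (a working inventory of about 3,000 characters, questions capped at 20 characters); the figures are not Hofstadter's:

        # How big would a pure question -> answer lookup table have to be?
        # The assumptions below are invented for illustration.
        vocab = 3_000    # assumed working inventory of Chinese characters
        max_len = 20     # assumed cap on question length, in characters

        # Count the distinct character strings of length 1..max_len.
        strings = sum(vocab ** n for n in range(1, max_len + 1))
        print(f"{strings:.3e} possible inputs")   # ~3.5e+69

        # Even discarding the overwhelming majority as ill-formed, a table
        # indexed on whole questions dwarfs physical storage (there are
        # roughly 1e80 atoms in the observable universe), so the rule book
        # would have to encode structure rather than enumerate strings.

    This is one way to read the "abstract patterns" passage: whatever the rule book is, it cannot simply be a list.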

  • 'But the most telling objection is "the suggestion (not explicitly articulated yet clearly implied) that the rules allowing the system to answer questions at native-speaker level can be couched merely in terms of manipulation of the symbols of the language", without requiring an understanding of "abstract patterns transcending the signs of Chinese, or of any human language - patterns behind the scenes, having to do with how ideas and concepts and knowledge are represented in complex, overlapping associative networks in the human mind".'

    Searle is silly and wrong, but this is not such a strong objection. "Complex, overlapping associative networks" aren't some magical boundary of requiring comprehension vs. not requiring comprehension; they're just a massive & massively complicated set of contingent rules. (For a toy version of that reduction, see the sketch after this comment.)

    March 22, 2007
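
    The reduction claimed above, that an associative network is itself just rules, can be made concrete. The toy spreading-activation network below is an editorial illustration with invented nodes and weights; nothing in it comes from the thread beyond the point it demonstrates:

        # A toy associative network: plain data plus one mechanical rule.
        # Nodes and weights are invented; "spreading activation" here is
        # nothing but contingent, rule-driven bookkeeping.
        edges = {
            "color": {"blue": 0.9, "green": 0.7},
            "blue":  {"sky": 0.8, "color": 0.9},
            "green": {"grass": 0.8, "color": 0.7},
        }

        def spread(activation: dict, steps: int = 2, decay: float = 0.5) -> dict:
            """Propagate activation along weighted edges for a few steps."""
            for _ in range(steps):
                nxt = dict(activation)
                for node, level in activation.items():
                    for neighbor, weight in edges.get(node, {}).items():
                        nxt[neighbor] = nxt.get(neighbor, 0.0) + level * weight * decay
                activation = nxt
            return activation

        # Activating "color" mechanically raises "blue" and "green", then
        # "sky" and "grass": association, with no understanding anywhere.
        print(spread({"color": 1.0}))

    Whether complication of this kind ever amounts to comprehension is exactly what the next comment pushes back on.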

  • 'Searle is silly and wrong, but this is not such a strong objection. "Complex, overlapping associative networks" aren't some magical boundary of requiring comprehension vs. not requiring comprehension; they're just a massive & massively complicated set of contingent rules.'

    I don't think that Hofstadter intended to suggest that there is a clear boundary establishing the dichotomy you suggest, but I think he was arguing that the degree of complication of the 'massively complicated set of contingent rules' was so great that it represented a pretty strong objection.

    I've not yet finished "Le Ton Beau de Marot", but if I were to sum up what I've understood of Hofstadter's arguments thus far, he seems particularly interested in exploring those aspects of translation that are hard (impossible?) to capture completely in terms of a set of contingent rules, even when those rules are highly complex.

    March 23, 2007