I recently learned that the expression “It’s all Greek to me” derives from medieval, Latin-speaking philosophers bemoaning their inability to read ancient Greek texts. That made me wonder what the Greeks say, which, it turns out, is “It’s all Chinese to me.” Before investigating what the Chinese say, however, I realized I’d have a deeper problem with whatever resource I might consult: it would be all English to me. And I don’t understand what understanding English amounts to.
To see why, a brief detour.
It’s easy to treat creatures around us as if they had minds like our own. Every pet owner believes that his beloved Fluffy has thoughts, desires, and feelings; we say things like “that ant hopes to get that bread” or “those weeds want to kill the lawn.” The hard question is whether such talk is literal truth or merely metaphor. It’s especially hard with respect to computers programmed to produce very human-like behaviors. Computers have gotten so sophisticated these days that it’s very easy to think a properly programmed computer could cross the line from merely seeming to have a mind to actually having one.
Here’s a reason to think it wouldn’t—and, at the same time, to question our own understanding of English.
Imagine a man locked in a room. Pieces of paper with strange marks come through a slot in the door; the man studies them, consults a rule book he has (written in English), and then assembles new marks from some boxes and returns them through the slot. The process repeats. He doesn’t understand these marks; he’s just mechanically following rules matching input marks with output marks. But unbeknownst to him the marks are actual Chinese characters. The people outside are native Chinese speakers who believe they are conversing, in writing, with another native speaker within.
Well, computers are like the man in the room: they’re purely mechanical devices which operate on electrical inputs to produce electrical outputs, all according to a program they follow mechanically. Just as the man with his rule book can perfectly simulate an ordinary conversation to outside observers, so too could a properly programmed computer. But just as the man does not actually understand any Chinese, neither does the computer understand what it is doing. Thus computers at best simulate mentality and cannot literally possess it.
This argument points to a crucial difference between computers and people, and thus gives us a reason to deny minds to computers while granting them to other people—but it also raises a difficult question. It assumes that there is more to “understanding” a language than simply being able to produce appropriate outputs given various inputs. After all, both the man and the computer can do the latter, but only the man allegedly displays the former. But what else is there? When you hear certain English sounds, you know what other sounds are appropriate to produce in reply. You “genuinely understand” English. So what exactly is there to “understanding” beyond the ability to utter the appropriate responses?
That’s what is all Urdu to me.
Source: John Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3 (1980): 417-458. Reprinted in John Perry and Michael Bratman, eds., Introduction to Philosophy: Classical and Contemporary Readings, 3rd ed. (Oxford: Oxford University Press, 1999).