Chinese Room thought experiment

Wilmer Digreux

Banned
Joined
Sep 17, 2023
Messages
2,248
Reaction score
4,167
Suppose that artificial intelligence research has succeeded in programming a computer to behave as if it understands Chinese. The machine accepts Chinese characters as input, carries out each instruction of the program step by step, and then produces Chinese characters as output. The machine does this so perfectly that no one can tell that they are communicating with a machine and not a hidden Chinese speaker.

The questions at issue are these: does the machine actually understand the conversation, or is it just simulating the ability to understand the conversation? Does the machine have a mind in exactly the same sense that people do, or is it just acting as if it has a mind?

Now suppose that Searle is in a room with an English version of the program, along with sufficient pencils, paper, erasers and filing cabinets. Chinese characters are slipped in under the door; he follows the program step by step, and it eventually instructs him to slide other Chinese characters back out under the door. If the computer had passed the Turing test this way, it follows that Searle would do so as well, simply by running the program by hand.
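The rule-following in the room can be sketched as nothing more than a lookup over symbol shapes. This is a deliberately trivial toy, not Searle's program, and it assumes a tiny finite rule book (a real Turing-test-passing program would be vastly more complex), but it makes the point: the procedure matches shapes and copies replies without any grasp of what the symbols mean.

```python
# Toy "rule book": maps input symbol strings to output symbol strings.
# The entries are illustrative placeholders, not part of the thought experiment.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，我说得很流利。",  # "Do you speak Chinese?" -> "Yes, fluently."
}

def run_program(symbols: str) -> str:
    """Follow the rule book step by step, exactly as the man in the room would:
    look up the shapes that came under the door, copy out the listed reply."""
    # Neither the table nor this function "understands" anything; it is
    # pure symbol manipulation, which is Searle's point.
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # default reply

print(run_program("你好吗？"))  # emits the scripted reply with zero understanding
```

Whether such mechanical lookup, scaled up enormously, could ever amount to understanding is exactly what the argument disputes.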

Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing behavior that makes them appear to understand. However, Searle would not be able to understand the conversation. Therefore, he argues, it follows that the computer would not be able to understand the conversation either.

Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in the normal sense of the word. Therefore, he concludes that the strong AI hypothesis is false: a computer running a program that simulates a mind would not have a mind in the same sense that human beings have a mind.





--------------------------------------------------------------------------------


The more I think about AGI, the more I tend to agree with this argument. I don't think machines will ever have a true mind or be able to actually think and have unprogrammed, conscious desires or goals. I think all the shit you hear from tech companies and the media these days is smoke and mirrors, designed to do 2 things:

- drum up gullible investors

- create a bit of fear in the public, which govts love to capitalize on



I don't think it will EVER be possible for humans to create truly conscious, thinking beings out of inanimate objects. Never going to happen. Don't be a sucker.



What do you think?
 
Whenever I think of the possibility of a self-aware computer I think of a famous sci-fi short story, Fredric Brown's "Answer". It's quite short, but it packs quite a punch. Here's the whole thing.
Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the sub-ether bore through the universe a dozen pictures of what he was doing.

He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe – ninety-six billion planets – into the super-circuit that would connect them all into the one super-calculator, one cybernetics machine that would combine all the knowledge of all the galaxies.

Dwar Reyn spoke briefly to the watching and listening trillions. Then, after a moment’s silence, he said, “Now, Dwar Ev.”

Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.

Dwar Ev stepped back and drew a deep breath. “The honor of asking the first question is yours, Dwar Reyn.”

“Thank you,” said Dwar Reyn. “It shall be a question that no single cybernetics machine has been able to answer.”

He turned to face the machine. “Is there a God?”

The mighty voice answered without hesitation, without the clicking of a single relay.

“Yes, now there is a God.”

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.

A bolt of lightning from the cloudless sky struck him down and fused the switch shut.
 
The more I think about AGI, the more I tend to agree with this argument. I don't think machines will ever have a true mind or be able to actually think and have unprogrammed, conscious desires or goals.
The real revelation won't be that we can create a computer with a real mind, it will be that the human mind is nothing more than a computer.

Once we truly understand the inputs and outputs, it will be easily replicated and surpassed.
 
No. Modern science has no answer for 'mind' and is just bumbling around trying to find ways to smuggle metaphysics into a mechanistic worldview.
 
Dean Koontz's Frankenstein series touched on this idea. Good stuff.
 
Why wouldn’t it be possible? If natural selection can generate a conscious mind from inanimate molecules, humans might be able to do the same someday. Fundamentally, it’s all about the organization of atoms.
 