Author Topic: When Science finally creates sentient robots/computers.... / LTN  (Read 2955 times)
flammableBen

« Reply #30 on: Tuesday, July 15, 2008, 23:08:58 »

Such things only make you stronger.
axs
« Reply #31 on: Tuesday, July 15, 2008, 23:09:19 »

hmmmm.
tans
« Reply #32 on: Tuesday, July 15, 2008, 23:19:49 »

evening all
flammableBen

« Reply #33 on: Tuesday, July 15, 2008, 23:22:14 »

On a more pleasant note, I just got around to watching Cox's goal against fenerbanchethingy on YouTube. I haven't got around to reading all the posts I missed and probably won't, but I'd noticed a few mentions of it and another mate kept telling me to watch it.

Not bad. 6/10.
Lumps

« Reply #34 on: Wednesday, July 16, 2008, 08:56:08 »

Depends on what rules they have built into them, but as a machine they should be treated as such. Their emotion is simulated, not real.

Alan Turing had an answer to this question back in the 1950s. He suggested a test where:

"Suppose that we have a person, a machine, and an interrogator. The interrogator is in a room separated from the other person and the machine. The object of the game is for the interrogator to determine which of the other two is the person, and which is the machine. The interrogator knows the other person and the machine by the labels ‘X’ and ‘Y’—but, at least at the beginning of the game, does not know which of the other person and the machine is ‘X’—and at the end of the game says either ‘X is the person and Y is the machine’ or ‘X is the machine and Y is the person’. The interrogator is allowed to put questions to the person and the machine of the following kind: “Will X please tell me whether X plays chess?” Whichever of the machine and the other person is X must answer questions that are addressed to X. The object of the machine is to try to cause the interrogator to mistakenly conclude that the machine is the other person; the object of the other person is to try to help the interrogator to correctly identify the machine."

If, over repeated runs of this kind of interrogation, the interrogator can correctly identify the machine no more than 70% of the time, then the machine can reasonably be thought to be intelligent.
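To put the setup in code terms (purely a sketch I'm making up here, not anything out of Turing's paper; the function and callback names are invented):

import random

def imitation_game(interrogator, person, machine, questions):
    # Hide the two participants behind the labels 'X' and 'Y' at random,
    # as in the setup quoted above. 'person' and 'machine' are callables
    # that answer a question with a string.
    labels = {"X": person, "Y": machine}
    if random.random() < 0.5:
        labels = {"X": machine, "Y": person}

    # The interrogator puts each question to both X and Y and sees the answers.
    transcript = [(label, q, answer(q))
                  for q in questions
                  for label, answer in labels.items()]

    # The interrogator then names the label they think hides the machine.
    guess = interrogator(transcript)        # returns "X" or "Y"
    return labels[guess] is machine         # True = machine correctly identified

# Played over many rounds; by the 70% figure above, the machine "passes" if the
# interrogator picks it out no more than 70% of the time, e.g. (hypothetical names):
#   hit_rate = sum(imitation_game(judge, person_bot, chat_bot, qs) for _ in range(100)) / 100
#   passed = hit_rate <= 0.7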

It's called the Turing Test and pops up a lot in science fiction (including that interview in Blade Runner where Harrison Ford realises that Sean Young is a replicant but fancies her so ignores it).

I think it's fair enough. If something acts in every way as if it is intelligent, to such an extent that we can't tell the difference between it and a naturally created intelligent being, then surely it's only right to treat it in the same way we would anyone else.

After all, we all treat other people as if they are intelligent, sentient beings because we perceive that they act as if they are. We've no other way of knowing.

That's possibly the nerdiest post I've ever made. :D

And oh, nice to have you back, Benny. I was starting to get worried about you.
flammableBen

« Reply #35 on: Wednesday, July 16, 2008, 09:20:14 »

Ahh, the old Turing Test. I had to write some fairly interesting essays on such things a long time ago. The Chinese Room thought experiment is a nice bit of related craziness. I always liked to think of it as a schizophrenic mind within a mind, although I know that's not really the idea of it.
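For anyone who hasn't run into it: Searle's point is that a man locked in a room, following a rule book for shuffling Chinese symbols, could send out perfectly sensible Chinese replies without understanding a word of them. Roughly this, as a toy sketch (the rule book and the example phrases are just made up for illustration):

# The "rule book": purely syntactic input -> output pairs.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会下棋吗？": "会一点。",      # "Do you play chess?" -> "A little."
}

def chinese_room(symbols: str) -> str:
    # The man in the room just matches the shape of the input and copies out
    # the listed reply; no meaning is involved anywhere.
    return RULE_BOOK.get(symbols, "请再说一遍。")   # "Please say that again."

print(chinese_room("你会下棋吗？"))   # prints 会一点。 without "knowing" anything about chess

From outside, the replies look fine; inside there's nothing but rule-following, which is exactly the worry people raise about the Turing Test.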
axs
« Reply #36 on: Wednesday, July 16, 2008, 16:52:12 »

Quote from: Lumps on Wednesday, July 16, 2008, 08:56:08

Interesting stuff. Not heard of it before, but then I'm not a sci-fi kinda guy.