Let's say a scientist develops an artificially intelligent program that, to a layman, interacts in a human-like manner and seems as intelligent as a human. To the scientist, the program is no more than a set of instructions in a text file that can be modified at will, compiled, copied, deleted, and so forth. The scientist does not perceive the program to be "alive" or even "intelligent" in any meaningful sense, because at every instant it is simply responding to its inputs in the way it was designed to. To the layman the program may appear intelligent, personable, and self-aware, but to the scientist this appearance is just a facade.

Is there a substantial difference between the program and a human? Is the only difference that someone knows how the program works, but no one knows how a human works? If they behave the same, are they the same? Or is there more to humanity than appearance? If someone were capable of "programming" a biological human, would that put the human and the computer program on the same level?

How would a secular materialist morally distinguish between the program and a human? Is it cruel for the scientist to terminate the program every night before he goes home? Is it cruel for him to tweak the program and prod it to see how it responds to various stimuli? A self-aware program might very well be designed to feel anguish at the prospect of being turned off, but is that anguish "real" or morally substantial? If it is real, is it okay for the programmer to tweak the program so that it no longer feels such anguish? How is that different from a cult leader who convinces his followers that if they drink the Kool-Aid they'll be taken up to live on a comet?

If such a program can react in a human-like manner, can we use it as a torture simulator? Perhaps our interrogators need to experiment to find the most reliable methods for extracting truthful information from prisoners. We don't want to cause undue pain to actual humans, but perhaps we could torture a sufficiently human-like program and see how it reacts.

What's more, in order to improve this program we will certainly need to experiment on it to verify its operation and test its limits. We will need to duplicate it, put it into confrontational situations, create families and groups of human-like programs and watch them interact, and so forth. To the scientist who wrote the program, these manipulations will probably be morally irrelevant, but a layman observing the process will likely encounter situations that appear quite troubling. Are they?
