COMMENSAL ISSUE 103


The Newsletter of the Philosophical Discussion Group
Of British Mensa

Number 103 : October 2000

ARTICLES
21st August 2000 : Anthony Owens

DETERMINISM

I suppose I could reply to Roger Farnworth by saying that I see no point in replying to anyone who calls me a Nazi, but then with abortion, and human cloning for cannibalised spare parts, spotting Nazis is getting quite difficult these days. I could even accuse him of not having read my response in the light of his claim that there has been no 'argument to support free will'.

Instead, I hope he will permit me to re-phrase my questions. Let us imagine a hypothetical situation involving a human and an android. The details of the android's structure and capabilities are unknown to the human. Each of them needs exclusive use of a certain piece of equipment in order to survive. The materials to make it are to hand, but they are sufficient for only one, and there is no prospect of obtaining any more.

If the human makes it for the android would this demonstrate genuine compassion, or foolishness? If the android makes it for the human would this be an example of its compassion, or its programming? If the human's actions are determined, is there a difference between the human and the android? If there is, what is it, and how would you account for it? If there isn't, please return to my previous questions.

You might say that the android was made deliberately, by a civilisation, and the human accidentally, by evolution; but I see no essential distinction between your 'determined' human and the android.

Anthony Owens


Anthony : I don't want to interfere in the discussions between you and Roger. However, I would like to ask why you think determinism or free will has anything to do with whether we should do anything for another being. Maybe I misunderstand you, but what is the relevance of whether or not the recipient of the action (rather than the agent) has free will? Why should our actions not be governed by the capacity of the object of our action to suffer from, or enjoy, its results? Closely confined prisoners at our mercy do not, to that extent, have free will. Does it not still matter whether we do them ill or good?

If the android is so constructed that it is capable of a quality of life superior to that of (at least some) human beings, then it might not be foolishness to act altruistically towards it, any more than it would be for an individual to sacrifice himself for another human being of superior qualities. The reasons why the android performs any specific action would depend on how it is programmed - or on how it is programmed to learn. As far as we know, humans are born pre-programmed with generic capacities that enable them to live in communities - e.g. to learn languages or moral codes - though the precise form of the language or ethical system depends on the environment we grow up in. If an android is similarly programmed (as we might say evolution has programmed us), is there reason to believe there is an essential moral difference?

The clear distinction is, of course, one of quality. Will it ever be possible to write programs of sufficient richness for the android to function to the same capacity as a human? And finally, will it ever be possible for the android actually to be aware - actually to have the experiences that mean we are not foolish to consider its interests?

I gather that your article above is intended as a reductio ad absurdum. For the reasons just given, I doubt that it succeeds.

Theo


