COMP217 – MINDS and MACHINES

 

Introduction

 

The aim of this course is to introduce students to some of the fundamental philosophical approaches to mind, and to relate them to discussions concerning the nature and possibility of intelligent machines.

 

Because philosophy is best learnt through discussion, reading and writing – and, of course, thinking during these activities – the format of the module will differ from the standard lecture and practical format.

 

Topics will be addressed through

  • A seminar discussion introducing the topic (Mondays at 11am). During this seminar, questions for discussion will be introduced and a list of suggested readings given.
  • Students should read the suggested readings, consider the discussion questions, and prepare a one-page position piece addressing the questions and identifying further questions for discussion.
  • A tutorial (Fridays at 11am) at which the questions will be discussed, with students drawing on their position pieces and identified questions.

 

Position statements will not be assessed in weeks 1 and 12. The other ten position statements will be assessed, each contributing 5% of the mark for the module. The remaining 50% of the marks for the module will be awarded on the basis of an examination. The examination will be a two-hour exam comprising a number of essay questions, of which students will be required to answer two.

 

The main text for 2006-7 will be E.J. Lowe, An Introduction to the Philosophy of Mind, Cambridge University Press, 2000. Other readings will be, where possible, available on the WWW, or in handouts.


 

Links to General Sources

 

There are a number of good general dictionaries and encyclopaedias of Philosophy and Philosophy of Mind on the internet. A selection of links:

 

Stanford Encyclopaedia of Philosophy

http://plato.stanford.edu/

 

Routledge Encyclopedia of Philosophy: Philosophy of Mind

http://www.rep.routledge.com/?authstatuscode=200

 

Dictionary of the Philosophy of Mind

http://artsci.wustl.edu/~philos/MindDict/dictindex.html

 

Wikipedia

http://en.wikipedia.org/wiki/Philosophy_of_mind

 

 

Field Guide

http://host.uniroma3.it/progetti/kant/field/

 

SWIF Philosophy of Mind

http://lgxserver.uniba.it/lei/mind/index.htm

 

Garth Kemerling’s Philosophy pages

http://www.philosophypages.com/

 


 

Week One

 

Minds and Bodies, People and Machines

 

Philosophy of Mind addresses the nature of mind and its relation to matter. But what is mind? What kinds of things have minds? Are minds things that can be “had”?

 

In this first week we shall explore what we mean by “mind” and its relation to other concepts such as “intelligence”. Given the focus of our course we will give some preliminary consideration to the notion of mind in relation to machines.

 

Questions for discussion:

 

What is the relation between “mind” and “intelligence”? What does it mean to say something “has a mind”? Which of the following can “have minds”: people, spirits, children, animals, insects, plants, cars, computers, stones?

 

Reading:

 

1. E.J. Lowe, An Introduction to the Philosophy of Mind, Cambridge University Press, 2000. Chapter 1.

2. http://en.wikipedia.org/wiki/Mind

3. For a classic discussion of mind, see René Descartes’s Meditations, Meditation 1 and, especially, Meditation 2.

 http://oregonstate.edu/instruct/phl302/texts/descartes/meditations/Meditation2.html

 

 

 


 

Week Two

 

The Argument From Illusion

 

The argument from illusion is intended to place a gap between mental phenomena and the external world. It can be used as the basis of scepticism about the external world, or of solipsism (scepticism about the external world including other minds). Reactions to the argument can determine positions in philosophy of mind.

 

One formulation can be found in Descartes’s Meditations:

 

“But, afterward, a wide experience by degrees sapped the faith I had reposed in my senses; for I frequently observed that towers, which at a distance seemed round, appeared square, when more closely viewed, and that colossal figures, raised on the summits of these towers, looked like small statues, when viewed from the bottom of them; and, in other instances without number, I also discovered error in judgments founded on the external senses; and not only in those founded on the external, but even in those that rested on the internal senses; for is there aught more internal than pain? And yet I have sometimes been informed by parties whose arm or leg had been amputated, that they still occasionally seemed to feel pain in that part of the body which they had lost, --a circumstance that led me to think that I could not be quite certain even that any one of my members was affected when I felt pain in it. And to these grounds of doubt I shortly afterward also added two others of very wide generality: the first of them was that I believed I never perceived anything when awake which I could not occasionally think I also perceived when asleep, and as I do not believe that the ideas I seem to perceive in my sleep proceed from objects external to me, I did not any more observe any ground for believing this of such as I seem to perceive when awake; the second was that since I was as yet ignorant of the author of my being or at least supposed myself to be so, I saw nothing to prevent my having been so constituted by nature as that I should be deceived even in matters that appeared to me to possess the greatest truth. And, with respect to the grounds on which I had before been persuaded of the existence of sensible objects, I had no great difficulty in finding suitable answers to them; for as nature seemed to incline me to many things from which reason made me averse, I thought that I ought not to confide much in its teachings. 
And although the perceptions of the senses were not dependent on my will, I did not think that I ought on that ground to conclude that they proceeded from things different from myself, since perhaps there might be found in me some faculty, though hitherto unknown to me, which produced them.” Meditation 6:7


 

A more explicit formulation is:

 

(1) Suppose you are hallucinating a pink rat.

(2) Then you must be seeing something.

(3) But what you see corresponds to no external material object.

(4) Rather, it must be an internal, immaterial object. (a "sense datum")

(5) But your experience is the same as it would be, if you were really looking at a pink rat.

(6) So what it is that you see is the same in each case.

(7) So even when you are really looking at something (e.g., a pink rat), all you ever really see are immaterial sense data.

Dave Beisecker, University of Nevada Phi101
http://www.unlv.edu/faculty/beisecker/Courses/Phi-101/Phi101.html

 

Questions for Discussion: How convincing is the argument from illusion? If it is not convincing, what is wrong with it? Does it make a difference if we are looking at an illusion, a hallucination or simply a variable perception? Does it apply to computers? If so, how, and if not, why not?

Reading:

 

  1. E.J. Lowe, An Introduction to the Philosophy of Mind, Cambridge University Press, 2000. pp. 102-114.
  2. The general sources listed above e.g. http://plato.stanford.edu/entries/sense-data/#ArgForSenDat .
  3. Try to find other formulations of the argument, e.g. the most famous presentation, in A.J. Ayer, The Foundations of Empirical Knowledge (London: Macmillan, 1940).
  4. A.J. Brokes, The Argument From Illusion Reconsidered, Disputatio 9 (2000).

http://www.disputatio.com/articls.html


 


 

 

Week Three

 

Dualism

 

Dualism – the idea that mind and matter are distinct substances – is a long-standing idea in the philosophy of mind. It is associated with Descartes, and Cartesian dualism is a common form, although there are other varieties of dualism. A number of arguments have been proposed against it, but in many ways it remains an influential conception of mind.

 

Questions for Discussion

 

What are the arguments for Cartesian dualism? What other dualist positions are possible? What are the arguments against dualism? How could matter and mind interact? Does dualism exclude the possibility of thinking machines?

 

Reading

 

1. Lowe, pp. 8-38.

2. The general sources, e.g. the Stanford Encyclopaedia of Philosophy article on Dualism.

3. Thomas Nagel, “What is it like to be a bat?” http://members.aol.com/NeoNoetics/Nagel_Bat.html

4. http://www.philosophyonline.co.uk/pom/pom_non_cartesian_dualism.htm

 

 

 


Week Four

Materialist Theories of Mind

If we reject dualism we need to offer an account of how we attribute mental predicates to people, and how we account for the subjective experience of mental states. A number of theories have been advanced, including behaviourism, functionalism and identity theories. This week we will look at these approaches and consider whether they can adequately account for mental events.

Questions for Discussion

Can any of the materialist theories account for what we want to say about mental states? Can they account for our experience of mental states? Is any of the materialist approaches more appealing than the others? Does adopting a materialist theory commit us to the possibility that computers have mental states?

Reading

  1. Lowe, pp. 32-68.
  2. On Behaviorism http://www.philosophyonline.co.uk/pom/pom_behaviourism_forms.htm
  3. On Functionalism http://plato.stanford.edu/entries/functionalism/ and http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/functionalism.html
  4. On Identity Theories http://plato.stanford.edu/entries/mind-identity/
  5. Ryle, The Concept of Mind, Hutchinson (1949) is a classic piece of 20th century philosophy attacking dualist notions.  The book is well worth looking at, but there is also a summary at http://www.angelfire.com/md2/timewarp/ryle.html

 


 

Week Five

Perception

In week 2 we looked at the argument from illusion, which was supposed to cast doubt on the reliability of our perceptions. This week we will consider a number of answers to the problem of perception, including causal, intentionalist and disjunctive theories. But, although seeing is often used as the paradigm of perceiving, we can perceive things using our other senses, and even without using senses at all, as when we perceive the truth of an idea. Is there more to perception than sensing? Do notions such as judgement and belief also play a role in perception?

Questions for Discussion

What is the problem of perception, and how well do the various theories of perception address it? When I see a mouse and hear a mouse do I perceive the same thing? When I perceive that a table is square does it matter whether I do so by sight or touch (or measurement)? Are mental perceptions, such as perceiving the truth of an idea, “real” perceptions, or is this only an analogy? What is the role of judgement in perception? Does perception necessarily result in beliefs?

Reading

  1. Lowe, Chapter 6.
  2. Crane’s article in Stanford Encyclopaedia http://plato.stanford.edu/entries/perception-problem/
  3. Wittgenstein, Philosophical Investigations pp193-200 (handout).
  4. An article from AI and Society by Tilghman
    http://www.springerlink.com/content/t0w8587652n5v78h/
  5. An interesting article by Douglas Hofstadter can be found at http://www.stanford.edu/group/SHR/4-2/text/hofstadter.html 

 

Week Six

Thought and Language

Among mental contents we generally assume there to be “things” called thoughts, and thinking seems to be the paradigmatic activity of an intelligent being. Thoughts seem to involve the combination of concepts, and a mental attitude (belief, doubt, hope etc.) directed towards this combination. Descriptions of such thoughts are generally in sentences expressing propositions, but it is not plausible to identify propositions expressed in natural language with thoughts. So what is the relation between thought and language? One hypothesis is that there is a “language of thought” that represents in much the same way as language, that is, a symbolic system realized in the brains of the relevant organisms. It has, however, been argued that not all thought is linguistic: for example, perhaps a craftsman thinks with his hands. Sometimes thoughts are accompanied by mental imagery: thinking of someone often seems to involve calling a picture of them to mind. Perhaps such images are more important for thinking than the linguistic expression of thoughts. There are also other questions relating to the ways in which language can constrain what we can think: if we can only think with concepts we have in language, then different linguistic communities will have different capacities for thought.

 

Questions for Discussion

Do thoughts represent states of affairs, and if so how? What is the difference between believing that something will happen, and hoping that it will? What is the role of mental imagery in thinking? What is the role of language in thinking? Is the language of thought (“Mentalese”) hypothesis plausible? If so, does it support the idea of Artificial Intelligence? Is it essential for AI?

Reading

  1. Lowe, Chapter 7
  2. Wittgenstein, Philosophical Investigations pp104-109.
  3. Stanford Encyclopedia  on Mentalese http://plato.stanford.edu/entries/language-thought/#What
  4. Stanford Encyclopedia on Belief          http://plato.stanford.edu/entries/belief/#2
  5. Stanford Encyclopedia on Reference http://plato.stanford.edu/entries/reference/

 

 

 


 

Week Seven

Action, Intention and Will

The ability to act is an important part of what it is to be an intelligent being: it is purposive behaviour that we attempt to explain using mental concepts. But what makes an event an action? There is a difference between a person’s actions and things that happen to them, but we need to characterize this difference precisely. Next we need to distinguish things that people do from things that merely happen as a result of what they do. We also want to distinguish intentional and voluntary actions from unintentional and involuntary actions. Finally, we need to relate actions to desires, beliefs and volitions.

Questions for Discussion

When can an event be said to be an action? What makes an action intentional, and what makes an action voluntary? Are reasons for action different from the causes of actions? Some modern computer systems, called autonomous agent systems, are based on the Belief-Desire-Intention (BDI) model. What do we want to say about the actions performed by such systems?
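As background to the last question, the BDI deliberation cycle can be sketched in a few lines of Python. This is an illustrative toy, not taken from any particular agent framework (the class, the attribute names and the “deliberate then act” loop are all simplifying assumptions): the point is only that such a system is literally programmed in terms of beliefs, desires and intentions.

```python
# A toy BDI-style agent: beliefs, desires and a committed intention.
# All names here are illustrative assumptions, not a real framework's API.

class Agent:
    def __init__(self, beliefs, desires):
        self.beliefs = set(beliefs)   # what the agent takes to be true
        self.desires = set(desires)   # states of affairs it would like
        self.intention = None         # the desire it has committed to

    def deliberate(self):
        # Commit to some desire not already believed to be satisfied.
        achievable = self.desires - self.beliefs
        self.intention = min(achievable) if achievable else None

    def act(self):
        # Acting realizes the intention, which then becomes a belief.
        if self.intention is not None:
            self.beliefs.add(self.intention)
            self.intention = None

robot = Agent(beliefs={"door is open"},
              desires={"door is open", "room is clean"})
robot.deliberate()
print(robot.intention)                    # "room is clean"
robot.act()
print("room is clean" in robot.beliefs)   # True
```

Whether the words “belief”, “desire” and “intention” in such code are anything more than suggestive labels for sets of strings is, of course, exactly the question for discussion.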

Reading

1.      Lowe, Chapter 9.

2.      Mike Wooldridge’s Agents Slides http://www.csc.liv.ac.uk/~mjw/pubs/imas/distrib/pdf-index.htm especially lectures 2 and 4.

3.      Donald Davidson, Actions, Reasons, and Causes, The Journal of Philosophy, Vol. 60, No. 23, American Philosophical Association, Eastern Division, Sixtieth Annual Meeting (Nov. 7, 1963), pp. 685-700. http://www.jstor.org/view/0022362x/di972820/97p0027b/0

4.      The online sources: try “action”, “practical reason” and “free will”.

 


Week Eight

Consciousness

Consciousness is at once familiar and puzzling. We are constantly aware of “ourselves” as things with a past, present and future, and this conception of a persistent self unifies our experiences into a coherent pattern and has an enormous influence on what we think and do. We are also inclined to think of ourselves as in some way separate from the world. Moreover, we see (some) others as being similarly persistent things. Questions about consciousness include:

  • simple description: what are its features (for ourselves and for other conscious beings)?
  • explanation: how can consciousness (come to) exist?
  • function: what role does consciousness play (for ourselves and for other conscious beings)?

There are a number of theories of consciousness, both dualist and materialist, but no real consensus on these issues.

Questions for Discussion

Are the problems associated with consciousness different for first person as opposed to third person attributions of consciousness? Can any of the theories of consciousness satisfactorily account for both first and third person phenomena? Can any account for either satisfactorily? Is there a satisfactory solution to the problem of other minds? How do the problems of consciousness and other minds relate to the notion of intelligent machines?

Reading

1. Lowe doesn’t say very much about consciousness explicitly, but there is an excellent and thorough article in the Stanford on-line Encyclopaedia:  http://plato.stanford.edu/entries/consciousness/

2. An article by Alex Byrne. http://www.bostonreview.net/BR31.3/byrne.html

3. There is also an interesting article in the Routledge on-line Encylopaedia: http://www.rep.routledge.com/article/W011?ssid=53827009&n=1#

4. The Stanford entry on Other Minds http://plato.stanford.edu/entries/other-minds/#3.2

 

 


Week Nine

The Turing Test

The Turing Test was proposed by Alan Turing to give a procedural meaning to the question “Can Machines Think?” The basic idea is that if a machine and a human being answer questions from an interrogator and the interrogator cannot tell which is the machine and which is the human, the machine will pass the test.

The sort of questions and answers envisaged by Turing are:

Q: Please write me a sonnet on the subject of the Forth Bridge.

A: Count me out on this one. I never could write poetry.

Q: Add 34957 to 70764

A: (Pause about 30 seconds and then give as answer) 105621.

Q: Do you play chess?

A: Yes.

Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?

A: (After a pause of 15 seconds) R-R8 mate.

There have been some attempts to build programs to pass the test. Among the earliest and best known is the Eliza program of Joseph Weizenbaum, intended to mimic an empathic psychotherapist (although Weizenbaum firmly denied that Eliza was in any way thinking). This program made use of a number of much-imitated techniques. Another program, PARRY, mimicked a paranoid patient. There is still a competition for machines, the Loebner Prize: a recent winner is Joan.
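To give a flavour of those much-imitated techniques, here is a minimal Eliza-style responder in Python. It is an illustrative sketch, not Weizenbaum's actual program: the keyword rules and canned replies are invented for this example, but the two core tricks, keyword pattern matching and pronoun “reflection”, are the genuine ones.

```python
import re

# First/second-person swaps so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules: a pattern and a response template. Real Eliza scripts
# had many more rules and ranked keywords; this is a bare sketch.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the matched fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(reflect(match.group(1)))
    # A content-free fallback keeps the conversation moving.
    return "Please go on."

print(respond("I am feeling anxious about my exam"))
# "How long have you been feeling anxious about your exam?"
```

Notice that the program understands nothing: it never parses, let alone interprets, what is said. That such shallow machinery can nonetheless sustain a conversation is worth bearing in mind when assessing the Turing test.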

Questions for Discussion

Is the Turing test fair? If it is unfair, is it unfair to the human or the machine? Current programs seem little better than Eliza: why was initial progress towards satisfying the test so good, and subsequent progress so slow? What would passing the test prove? Machines have developed a long way since Turing’s time: can the test be updated? Would this provide convincing evidence of a thinking machine? Is there an alternative to the Turing test which would establish that a machine was thinking?

 

 

 

Reading

1.      A.M. Turing, Computing Machinery and Intelligence, Mind, 1950, vol. 59, no. 236, pp. 433-460. Text and a commentary can be found at http://www.abelard.org/turpap/turpap.htm

2.      Joseph Weizenbaum, ELIZA--A Computer Program For the Study of Natural Language Communication Between Man and Machine, Communications of the ACM Volume 9, Number 1 (January 1966). http://i5.nyu.edu/~mm64/x52.9265/january1966.html

3.      Dialogues with Eliza, Parry and Racter
http://www.stanford.edu/group/SHR/4-2/text/dialogues.html

4.      Joan http://www.guardian.co.uk/comment/story/0,,1879142,00.html

5.      Lowe Chapter 8, especially 209ff

 

 


 

Week Ten

Searle and the Chinese Room

Searle, in his 1980 paper Minds, Brains, and Programs, sets out to prove that “strong” Artificial Intelligence is not possible. His strategy is to set up a situation in which all the aspirations of AI are satisfied, but in which it is clear that no intelligence is involved. Briefly stated, his scenario is a man inside a room who is given written questions in Chinese. He has a manual which leads him to give written answers in Chinese which are considered appropriate by the questioners. But neither he, nor the “room”, understands Chinese: everything is mechanical, and no intelligence is required to produce the answers. The article produced a great deal of controversy, and a number of leading workers in AI gave their replies. It remains to this day the best known argument against the possibility of AI.

Questions for Discussion

Does Searle’s example give a fair picture of what people in AI are trying to produce? Does it apply to today’s systems as well as those of 1980? Can we think of systems (current or future) to which his argument would not apply? Are any of the replies to Searle convincing, or does he meet all the objections? If not, should we abandon AI?

Reading

1.      Searle’s original article, John R. Searle, “Minds, Brains, and Programs”, The Behavioral and Brain Sciences, vol. 3, can be found at http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html

2.      The peer responses to Searle are also in volume 3 of The Behavioral and Brain Sciences. These are not online, but the volume is in the Sidney Jones Library at QP351.B2. I have distributed a photocopy.

3.      Internet Encyclopaedia of Philosophy http://www.iep.utm.edu/c/chineser.htm

4.      Lowe, Chapter 8, especially 214ff.


 

Weeks Eleven and Twelve

The Intentional Stance: Can Machines Think?

Daniel Dennett argues that when explaining or predicting behaviour we can adopt one of three stances: the design stance, the physical stance and the intentional stance. Which is best to adopt depends on which works best. For a simple program the design stance is good, but for a malfunctioning system we need the physical stance (“it won’t work if it is not switched on”). But for complex systems (e.g. chess-playing systems) the intentional stance is the most useful. We don’t worry about “real” beliefs and desires; we simply ascribe them because they are useful. The intentional stance requires us to presume rationality. The intentional stance is widely used in common-sense reasoning, but scientific progress can lead us to replace the intentional stance with the design stance.

"Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do." (Daniel Dennett, The Intentional Stance, p. 17)
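Dennett’s recipe can be rendered as a toy prediction function, here applied to a chess-playing program. Everything in this sketch (the belief and desire labels, the rules) is an invented assumption, not Dennett’s own formalism: the point is only that the prediction proceeds from ascribed beliefs and desires plus a presumption of rationality, with no reference to the program’s physical or design details.

```python
# Predicting an agent's behaviour from ascribed beliefs and desires,
# in the spirit of Dennett's intentional stance. The specific beliefs,
# desires and actions below are illustrative inventions.

def predict_action(beliefs: set, desires: set) -> str:
    # Practical reasoning: choose the act that furthers the agent's
    # goals in the light of its (ascribed) beliefs.
    if "win the game" in desires:
        if "opponent's king is exposed" in beliefs:
            return "attack the king"
        if "material is even" in beliefs:
            return "improve position"
    return "wait"

# Treat the chess program as a rational agent and predict its move:
print(predict_action({"opponent's king is exposed"}, {"win the game"}))
# "attack the king"
```

Note that the same prediction could in principle be derived from the program’s source code (the design stance) or its circuitry (the physical stance); the intentional stance earns its keep only because it is vastly cheaper and still works.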

Questions for Discussion

Is Dennett’s description of the three stances convincing? Do we use them? Are there implications for philosophy of mind: is any particular theory of mind in tune with the intentional stance? Could we apply the intentional stance to ourselves? How well does the intentional stance explain human behaviour? Are there other stances that we can use? How does Dennett’s idea relate to the question of whether machines can think? Can they? Do they?

Reading

1.      Chapter 1 of Dennett’s Brainstorms (handout).

2.      A Dialogue on the Web http://www.consciousentities.com/dennett.htm

3.      Wikipedia http://en.wikipedia.org/wiki/Intentional_stance

 

 


 

 

 

 

1. Introduction and overview: Minds and Bodies, People and Machines

2. Scepticism and the Argument from Illusion

3. Dualism

4. Behaviourism, Functionalism and Identity Theories

5. Perception

6. Thought and Language

7. Consciousness

8. Action, intention and will

9. Machine Intelligence and the Turing Test

10. Searle and The Chinese Room argument

11 and 12. Dennett and the Intentional Stance: Can Machines Think?