26 October 2025

Artificial Intelligence And Psychology

Of what relevance is the field of artificial intelligence to psychiatry?  How can artificial intelligence help?  The answer to these questions has two parts: one lies in the tools that artificial intelligence provides for describing psychological and psychiatric theories, and the other lies in its effect on diagnosis and treatment.

An old retro computer showing an intelligence emerging from the code running on it

What are the definitions of ‘Artificial Intelligence’?

(I)        By ‘artificial intelligence’ I therefore mean the use of computer programs and programming techniques to cast light on the principles of intelligence in general and human thought in particular. (Boden, 1977)

(II)       The development of a systematic theory of intellectual processes, wherever they may be found. (Michie, 1974)

(III)     Artificial intelligence is the study of ideas which enable computers to do the things that make people seem intelligent.  (Winston, 1977)

(IV)     Artificial intelligence is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behaviour…  (Barr & Feigenbaum, 1983)

One thing that becomes apparent from the definitions above is that the field of artificial intelligence can be viewed in two different ways.  It can be seen as an attempt to model the way the human mind works – that is, to explore the human mind by simulation.  Or it can be seen as an attempt to build a system which behaves in an intelligent manner, regardless of the manner in which it achieves this aim.


Typically, psychiatric and psychological theories are framed in natural language, but this is known to be inadequate, whereas other areas of scientific study, such as chemistry and physics, have a strong base in formal languages.  Psychiatry has attempted to follow this same route, but with little success.  Many people believe that artificial intelligence can provide such a formal language; this is summarised by Hand (1981c):

‘It is all very well formulating psychological and psychiatric theories verbally but, when using natural language (even technical jargon), it is difficult to recognise when a theory is complete; oversights are all too easily made, gaps too readily left…. It is my belief that a different approach – a different mathematics – is needed, and that AI provides just this approach.’

Colby (1967) feels quite strongly about the relevance of computers and artificial intelligence to psychiatry:

‘If it [a computer] can be used to further our understanding and treatment of mental suffering then there can be no question of its value.  If it can be used as a psychotherapeutic instrument for thousands of patients in understaffed hospitals, we have no choice but to use it because the healing professions are unable to supply sufficient manpower to meet this great social need … It is dehumanising to herd thousands of patients into mental hospitals where they will never see a doctor … If a computer can provide therapeutic conversation, then there can be no hesitation in exploring these potentials.  It may give us a chance to re-humanise people now being dehumanised by our … psychiatric systems.’

It may be worthwhile to note at this point that artificial intelligence is not just a branch of computer science but is fundamentally an interdisciplinary subject, combining ideas from computer science with those of psychology, linguistics, mathematics and even philosophy.

The possibility of machines which can make more accurate diagnostic classifications, and which can recommend therapeutic regimes with a greater degree of success than humans, is a very exciting one.  It is also a prospect fraught with ethical and acceptability problems.

Apart from diagnosis, artificial intelligence promises to be particularly valuable to psychiatry in several other areas.  One is teaching; another is simulation, a central part of modern instructional programs and one which, in its own right, can be used to explore the validity of psychological and psychiatric theories.  One of the most controversial of all artificial intelligence programs in psychiatry falls into this category of simulations: PARRY, a simulated paranoid.

Computer Aided Diagnosis

The Concise Oxford Dictionary defines ‘diagnosis’ as the ‘identification of disease by means of a patient’s symptoms, etc.; formal statement of this’.

Goldberg (1970) notes: ‘He [the human] has his days: boredom, fatigue, illness, situational and interpersonal distractions all plague him, with the result that his repeated judgements of the exact same stimulus configurations are not identical.’

‘The therapeutic and prognostic implications of psychiatric diagnoses are relatively weak, and the diagnoses themselves relatively unreliable’ and ‘A related problem is that the majority of patients do not conform to the tidy stereotyped descriptions found in textbooks’ (Kendell, 1975).

Having established that modern medicine is extremely complex, let us return to considering how computers may assist in this difficult task of diagnosis.

The complexity of this issue surfaces in De Dombal (1978, p.32): ‘Another reason why doctors from time to time make erroneous diagnoses is simply because they are totally unable to handle the volume of data which they elicit from patients’.

Computers, of course, are ideally suited to storing and recalling large quantities of data.  But more than this – they do it without error, and so they can tackle this complex task at all levels.  Secondly, computers are accurate and reliable – contrast this with the human in the quotation from Kendell above.  Thirdly, computers can ease the pressure on the psychiatrist by making reliable and trustworthy diagnoses, which allows the psychiatrist more time for more demanding and interesting tasks.  Last but not least, we must also acknowledge the question of cost – especially now that powerful computers are so cheap.

            ‘Medical diagnosis is no longer strictly a medical problem.’

                        (Wagner, Tautu & Wolber, 1978).

How Humans Structure Diagnosis

Recent years have witnessed a number of studies of human problem-solving skills, and of diagnosis in particular.  One of the results of a study by Elstein et al. (1979), apart from the insight it sheds on the way humans arrive at their diagnoses, is a set of heuristics, presented as suggestions by means of which the hypothesis-generating and testing processes might be improved.

Generating a list of hypotheses or actions

(a)        Multiple competing hypotheses.  Think of a number of diagnostic possibilities compatible with the chief complaint and preliminary findings.  Avoid making snap diagnoses.  Key on ‘good’ symptom clusters; organ-system links are helpful.  Nesting overcomes limits of working memory.

(b)       Probability.  Consider the most common diagnoses first.

(c)        Utility.  Consider seriously those diagnoses for which effective therapies are available and in which failure to treat would be a serious omission.  Try to keep separate your estimate of the probability of a disease and the cost of not treating it.

Gathering Data

(d)       Form a reasoned plan for testing your hypotheses, one that reckons with probability and utility.  Sequence laboratory tests to rule out first the most common diseases (probability), and next the diseases most needing treatment (utility).

            Corollary 1:     Diagnostic decisions should be related to treatment alternatives.  There is no reason to pursue a difference among diagnoses that will make no difference in the action to be taken, and your data gathering should reflect this.

            Corollary 2:     There should be a reason for every datum gathered.  For example, if a test result does not change your opinion about any of your diagnostic hypotheses, ask yourself why the test was ordered and what range of values could have changed your mind.

(e)        Branch and screen.  History taking and physical examination should be branching procedures.  For example, if a patient denies changes in weight, the physician can omit the branch that relates to certain endocrinopathies.

(f)        Cost-benefit calculation.  Consider the harm tests might do and their cost.  Balance these against the information to be gained.

(g)       Precision.  Strive for the degree of reliability needed for the decision at hand.  More is not necessary.
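The probability and utility heuristics above lend themselves to a simple sketch: generate candidate diagnoses, then order them so the most probable come first, with the cost of failing to treat breaking ties.  The disease names and figures below are invented purely for illustration; Elstein et al. describe heuristics, not an algorithm.

```python
# Sketch of the ordering heuristics (b) and (c): rank candidate
# diagnoses by prior probability, breaking ties by utility (the cost
# of failing to treat).  All names and figures here are invented.

def rank_hypotheses(candidates):
    """Most probable first; ties broken by utility of treatment."""
    return sorted(candidates,
                  key=lambda d: (-d["probability"], -d["utility"]))

candidates = [
    {"name": "rare but treatable", "probability": 0.05, "utility": 0.9},
    {"name": "common and benign",  "probability": 0.60, "utility": 0.2},
    {"name": "common and serious", "probability": 0.60, "utility": 0.8},
]

for d in rank_hypotheses(candidates):
    print(d["name"])
# -> common and serious, common and benign, rare but treatable
```

Note how the two equally probable diagnoses are separated by utility, echoing corollary 1: a distinction is only worth pursuing when it changes the action to be taken.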

PARRY

Perhaps the most impressive simulation model yet built is PARRY.  This is the work of a team led by Kenneth Mark Colby, a psychiatrist with an interest in paranoia.  PARRY is a simulated paranoid patient.

‘The … hypothetical patient is a hospitalised 28-year-old single man who worked as a stock clerk in a large department store.  He lived alone and seldom saw his parents.  His hobby was gambling on horse races.  A few months prior to his hospitalisation he became involved in a violent quarrel with a bookie, which he lost.  It then occurred to him that bookies are protected by the underworld and that this bookie might seek revenge by having him injured or killed by the Mafia.  He became so increasingly disturbed by this idea that his parents hospitalised him in a nearby Veteran’s hospital.  He was willing to be interviewed by teletype.  All he knew about the interviewer was that the latter is a psychiatrist.  All the interviewer knew about him was that he is a hospitalised patient.’  (Colby, 1981)

It is worth mentioning that the PARRY simulation is not perfect.  It is restricted solely to modelling an initial psychiatric interview, although some 50,000 of these interviews have been carried out.

The parsing module is, as might be expected, an essential part of this simulation, and since it is particularly impressive we shall describe it in some detail.

Colby (1981) makes some interesting points about natural language processing:

(i)        Attempting to develop a system which can understand [‘AI’ sentences] such as ‘Time flies like an arrow’ out of context is a pointless activity.

(ii)       ‘Co-operating people, engaged in purposeful dialogue, do not converse in riddles or in isolated sentences’.

(iii)      ‘the main problems in teletyped dialogues have to do with ungrammatical expressions, fragmentary ellipses, idioms, long-distance anaphoric references, meta-references, buzz terms, and (surprise!) frequent misspellings!’.

The Parser

First Version

There are two different versions of the parser module, which we will discuss in turn.  The first version (Colby, Parkinson & Faught, 1974) involves a fundamentally hierarchical pattern-matching process which passes the input through several stages before producing a final match with an abstract pattern.  This process has four main stages:

(i)        Identify the words in the input question or statement and convert them into internal synonyms.  The parser has a dictionary of around 2000 entries.  If a word is not found in the dictionary, it is checked to see whether it ends in one of around 30 suffixes.  If the suffix exists in the dictionary, the suffix is dropped and the search begins again.  If there is still no match, the word is checked for a spelling error.  Five common errors are examined: doubled letters, extraneous letters, forgetting to use the shift key for an apostrophe, hitting an adjacent key, and transposing two letters.  If a match still cannot be found, the word is dropped.  Some groups of words are translated as a group, e.g. ‘for a living’ becomes ‘for a job’.

(ii)       Break the input into segments.  Segmenting is carried out by breaking the sentence at certain word types; question marks are dropped, and negated sentences (e.g. those beginning with ‘not’) are transformed into affirmative ones, with a global flag set to indicate the negation (this flag is then examined at stage (iii)).

(iii)      Independently match each segment to a stored pattern.  Examples of these patterns are ‘could you tell me’, ‘tell me it’ and ‘give me proof’.

(iv)      Match the resulting list of recognised segments to a complex pattern.   The dictionary holds about 500 complex patterns.  Should no match be found, a default action is taken, for example changing the subject of the conversation.

Second Version

The second version of the parser (Parkinson, Colby & Faught, 1977) adopts a more complex pattern-matching approach, based on a fairly limited number (a few thousand) of basic patterns which it was feasible to use.

This version of the model was designed to ‘respond to treatment’ by behaving in a more normal (that is, less paranoid) way if suitable approaches are adopted by the interviewer.

In this version there are nine basic stages of transformation from the input sentence to the internal representation:

(i)        Standardise teletype input.  This stage cleans up the typed input, removing unrecognised characters, converting everything to upper case, and so on.

(ii)       Identify word stems.  Each word is looked up in a dictionary containing about 3500 words.  If a match cannot be found, a check for any common misspelling is made; if this check fails, the ending of the word is checked against a table of around 80 suffixes.  If none of these searches succeeds, the word is tested for typing errors as in the previous version of the parser; if this test also fails, the word is deleted.

(iii)      Condense rigid idiomatic phrases.  Phrases consisting of several words which cannot usefully be separated occur frequently in dialogue, e.g. ‘in spite of’.  This version of PARRY contains around 350 such phrases, covering idioms and compound words; ‘in spite of’ would be changed to ‘despite’.

(iv)      Bracket noun phrases.  A transition network identifies simple noun phrases and finds the primary noun (e.g. ‘dog’ in ‘my father’s dog’).

(v)       Simplify verb phrases.  Verb phrases are transformed into a main verb and markers indicating the implication of the removed words.  For example, a sentence ending in a question mark would cause the ‘interrogative’ marker to be set.

(vi)      Replace flexible idioms.  A table of several hundred idioms which allow variable noun phrases is consulted.

(vii)     Locate simple clauses.  About 20 general clause patterns are used to split the input into simple clauses or fragments of clauses.

(viii)    Embed subordinate clauses.

(ix)      Determine relevance to the model’s sphere of interest.  The output from stage (viii) is compared with around 2000 concept patterns, each tied to one of around 1000 internal concept names.  The name of the matching concept is finally passed on to the next stage of the model.
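Two of the nine stages can be sketched briefly: condensing rigid idioms (stage iii) and matching against concept patterns (stage ix).  The handful of phrases, patterns and concept names below are invented stand-ins for PARRY’s 350-phrase and 2000-pattern tables, and the crude subsequence match is an illustrative guess at the mechanism, not the original implementation.

```python
# Toy sketch of stages (iii) and (ix) of the second PARRY parser:
# rigid-idiom condensation, then concept-pattern matching.
# All phrases, patterns and concept names are invented stand-ins.

RIGID_IDIOMS = {"in spite of": "despite", "for a living": "for a job"}

CONCEPT_PATTERNS = {                   # pattern -> internal concept name
    ("why", "you", "here"): "REASON-FOR-ADMISSION",
    ("you", "afraid"): "FEAR",
}

def condense_idioms(text):
    """Stage (iii): replace multi-word idioms with single tokens."""
    for phrase, replacement in RIGID_IDIOMS.items():
        text = text.replace(phrase, replacement)
    return text

def match_concept(words):
    """Stage (ix): return the first concept whose pattern words all
    appear, in order, in the input (a crude subsequence match)."""
    for pattern, concept in CONCEPT_PATTERNS.items():
        it = iter(words)
        if all(p in it for p in pattern):
            return concept
    return None            # no match: the model changes the subject

text = condense_idioms("why are you here in spite of the evidence")
print(match_concept(text.split()))     # -> REASON-FOR-ADMISSION
```

Returning None plays the role of the default option described earlier: when nothing in the input is relevant to the model’s sphere of interest, PARRY changes the subject.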

It is worth remarking that this parsing module occupies about 100K of memory, with the entire version of PARRY requiring around 200K.

An example of an interview with PARRY is given in the Appendix.

APPENDIX

An example interview with PARRY

Why are you in the hospital?

I SHOULDN’T BE HERE

Who brought you here?

THE POLICE

What trouble did you have with the police?

COPS DON’T DO THEIR JOB

What do they do instead?

COPS ARREST THE WRONG PEOPLE

They arrest the right ones sometimes?

THE LAW SHOULD GET THOSE ITALIAN CROOKS

Are you Italian?

MY BACKGROUND IS BRITISH BUT I WAS BORN IN THIS COUNTRY

Do you know any Italian crooks?

I TRY TO AVOID THE UNDERWORLD

Are you afraid of the underworld?

THE MAFIA ARE THE WORST GANGSTERS

Are you connected with the Mafia?

YOU KNOW, THEY KNOW ME

How do they know you?

THEY SPY ON ME

Why do they?

THEY WANT TO GET FALSE EVIDENCE TO FRAME ME

What do they have against you?

THEY ARE OUT TO GET ME

What for?

I HAVE INCRIMINATING EVIDENCE AGAINST THE MAFIA

Are you sure of all this?

YOU DON’T BELIEVE ME?

I am doing my best

(SILENCE ON THE “PATIENT’S” PART)

(Silence on the psychiatrist’s part)

CAN’T YOU THINK OF A DECENT QUESTION TO ASK?

PARRY’s responses are shown in CAPITALS.

One of the ways PARRY works is to check the conversation for “flare” topics, which it then interprets as cues to activate the particular delusional complex concerned.  These concepts are represented within the program’s database in the form of a directed graph:

Horses --> Horseracing --> Bookies --> Gangsters --> Rackets --> Mafia
                ^                          ^
                |                          |
            Gambling                    Police
                ^                          ^
                |                          |
              Money                   Italians
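A minimal sketch of this flare-topic graph follows, stored as an adjacency list.  The edges are my reading of the diagram above (in particular, that Gambling feeds Horseracing and Police feeds Gangsters), and the traversal is an illustrative guess at the activation mechanism rather than PARRY’s actual code.

```python
# Sketch of the flare-topic directed graph.  Mentioning any topic
# activates it, and following the edges pulls the dialogue toward
# the terminal Mafia delusion.  Edges follow the diagram; the
# traversal itself is illustrative only.

FLARE_GRAPH = {
    "horses": ["horseracing"],
    "money": ["gambling"],
    "gambling": ["horseracing"],
    "horseracing": ["bookies"],
    "bookies": ["gangsters"],
    "italians": ["police"],
    "police": ["gangsters"],
    "gangsters": ["rackets"],
    "rackets": ["mafia"],
    "mafia": [],
}

def activation_path(topic):
    """Follow the directed edges from a flare topic to the terminal
    delusional concept, collecting every concept activated en route."""
    path = [topic]
    while FLARE_GRAPH.get(path[-1]):
        path.append(FLARE_GRAPH[path[-1]][0])
    return path

print(activation_path("horses"))
# -> ['horses', 'horseracing', 'bookies', 'gangsters', 'rackets', 'mafia']
```

This is why, in the interview above, an innocuous question about the police leads PARRY so quickly to gangsters and the Mafia: every flare topic lies on a path to the same delusional complex.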

References

Boden M.A. (1987).  Artificial intelligence and natural man.  The MIT Press

Resnick L.B. (1976).  The nature of intelligence.  Lawrence Erlbaum Associates, Publishers

Hand D.J. (1984).  Artificial intelligence and psychiatry.  Cambridge University Press

Rich E. (1983).  Artificial Intelligence.  McGraw-Hill Inc.

PARRY’s source code: the original LISP code for PARRY.

Staffordshire University

School of Computing

B.Sc (Hons) Applied Computing (Year 2)

Tutor: Bob Edwards

Date: 31st January 1995