Mechanical Aids to Computation and the Development of Algorithms

1. Current Concerns in Computer Science

1.1. Introduction

One of the more hackneyed clichés touted around at present is that "nowadays, computers are everywhere". It is widely believed that computers influence significant aspects of our lives; computers are seen, by some, as a social threat; by others, as a panacea for social problems; and, frequently, computers are accepted as a valid scapegoat for human incompetence. Yet, as little as 30 years ago, such views would not have been found outside the pages of science-fiction stories. It is clearly the case that, in order for these perceptions to have gained ground, a tremendous increase in the public awareness of computers must have occurred. In this introductory lecture we shall examine what the principal current concerns in computer science are and discuss how these have contributed to popular misconceptions about the rôle and functionality of computational systems. In particular we will address the following questions:

- What do people commonly believe computers are capable of, and why are many of these beliefs mistaken?
- What cultural and social factors have shaped the public perception of computers?
- What are the principal current scientific concerns of Computer Science?

2 Misconceptions concerning computational systems

The views described above can be reduced to two assumptions:

(A) computers are capable of doing anything;
(B) computers, rather than the people who design and use them, can be responsible for errors.

(A) accounts for the perception of computer potential as threatening or beneficial; (B) for the proffering of computers as causes of errors.

In order to refute (B) it is enough to observe that a computer is merely a tool that is used, under human control, to specific ends. To argue that the quadrupling of a pensioner's electricity bill was caused by `the computer making a mistake' is no more logical than propounding `the engine made a mistake' as a defence to a motoring offence. Thus, the only individuals who are `responsible' for the behaviour of machines are their users and designers.^1

1) There are numerous examples of the (often wilful) failure to understand this obvious fact: the excuses made by television companies which failed to come close to predicting the outcome of the last two General Elections; the near nuclear accident at Three Mile Island; the actual nuclear disaster at Chernobyl.

The assertion `computers can do anything' is rather more difficult to deal with. When precisely formulated as a mathematical abstraction, however, it has been known to be false since 1936.^2

2) vide `On computable numbers, with an application to the Entscheidungsproblem' (Proc. London Math. Soc., Series 2 (42), pp. 230-265 (1936); corrections: ibid, (43), pp. 544-546 (1937)). An achievement of the great English mathematician and computer pioneer Alan M. Turing (1912-54), whose contribution will be discussed later in this course.

It is the case, however, that this technical refutation of the claim `computers can do anything' is not universally accepted: Turing's premises are questioned, even by some computer scientists. Nevertheless, underlying Turing's proof is the appreciation that all any computational system does is to manipulate finite sequences of symbols, such manipulations being carried out in accordance with a given program^3 of instructions.

3) The U.S. spelling `program' (as opposed to the British `programme') has become standard usage when the word is used in the particular sense implied here.

Given this, rather stark, description of the capabilities of computational mechanisms, it should seem immediately apparent that the claim `computers can do anything' is fallacious. There are those who would dispute this, however, and we will return to this issue at the end of the lecture.
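To make the description `manipulating finite sequences of symbols in accordance with a given program of instructions' concrete, the following sketch (in Python, purely for illustration) interprets a tiny table of instructions over a tape of symbols, in the style of a much-simplified Turing machine. The particular program shown merely inverts a string of 0s and 1s; the machine, the program and all names are invented for the purpose of the example.

    # A much-simplified Turing-machine-style interpreter: a program is a table
    # mapping (state, symbol) to (symbol to write, head movement, next state).
    def run(program, tape):
        tape = list(tape)
        state, position = 'start', 0
        while state != 'halt' and 0 <= position < len(tape):
            symbol = tape[position]
            new_symbol, move, state = program[(state, symbol)]  # look up the instruction
            tape[position] = new_symbol                         # rewrite the symbol
            position += move                                    # move along the tape
        return ''.join(tape)

    # An illustrative program which inverts every binary digit it reads.
    invert = {('start', '0'): ('1', +1, 'start'),
              ('start', '1'): ('0', +1, 'start')}

    print(run(invert, '010011'))   # prints 101100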

1.2. Cultural and social causes of misconceptions concerning computers

One of the most noticeable aspects of how important scientific developments become widely known is in their (frequently misleading) portrayal in popular culture. As examples one may cite the rôles played by: electrical power in Mary Shelley's Frankenstein; new psychological theories of personality in Stevenson's Dr. Jekyll and Mr. Hyde; the exploitation of Freud's psychoanalytic theories by Surrealist art; the effects of nuclear radiation and atomic power in films such as Them! and The Incredible Shrinking Man; etc. Progress in Computer Science has been no exception to this phenomenon, even though much of its history is concerned with the development of artefacts rather than abstractions. Of course the association of anthropomorphic attributes with machines and the view of these as threatening forces are very old ideas, cf. the Luddite riots of the early 19th century. In this century the same theme may be found in works ranging from early instances such as Duchamp's great sculpture La mariée mise à nu par ses célibataires, même (Le Grand Verre)^4 through musical compositions like Varèse's Déserts to its specific computer representations in such films as Russell's Billion Dollar Brain^5, Tarkovsky's Solaris, Kubrick's 2001, etc.

4) Although gynomorphic would be a more appropriate description of this.

5) A particularly laughable misrepresentation of computers: the eponymous machine is capable of analysing material of enormous complexity in order to make forecasts concerning military strategy; it can produce output in graphical, spoken and printed form, but throughout the film information is supplied to it in the form of paper tape and punched cards, techniques that had already been improved upon by the time the film was made.

Thus, in the specific case of computer development, the popular dissemination of the capabilities of these machines took place, from the 1950s onwards, through the media of cinematic fiction and less-than-accurate newsreel and journalistic coverage. This was, of course, during a period (post-war to the mid-1960s) when a general appreciation of new technology and ideas would largely be acquired through the cinema rather than through television.

Although the image of machines as threatening/beneficial has long been fashionable, it is not one that can be seriously sustained without some underlying foundation - no matter how weak such a foundation may be in reality; e.g. the Luddite reaction to new technology mirrored the real threat that individuals felt concerning their employment and welfare. If we examine what this foundation is in the case of progress in computer science, then some understanding of how it has been distorted may be gained.

When we observe an individual performing, with great proficiency, a task that is regarded as intellectually demanding we have a natural tendency to view such a person as possessing above average `intelligence'. This is especially the case when the activity at which they excel is one which we, ourselves, find beyond our abilities. Thus, a gifted writer or artist, a successful entrepreneur, a prominent scientist or mathematician, a persuasive orator, or a skilled chess player, may often be regarded as people of `great intelligence', despite whatever evidence they may exhibit to the contrary. This tendency accounts in part for certain misconceptions that are widely held about computers and, unfortunately, is one which too many computer scientists have encouraged and too few have tried to correct.

Traditional applications of computers have included numerical calculation, deciphering codes, the storage and processing of recorded information and, more recently, decision-support and advice-giving. Consider the first of these. Involved arithmetic calculation is an activity that many, if not most, people find difficult to carry out, particularly if it is to be done solely in one's head. Computers, however, can be programmed to perform such calculations extremely quickly and in a manner which always gives the correct answer. Similarly, deciphering encrypted text (the process of translating a piece of text given in an encoded form back to its original plain text format) is something which requires a skilled human agent but which can be accomplished, fairly easily, by appropriate computer systems.^6

6) One of the first British computers - COLOSSUS - was constructed during the Second World War to assist with precisely this task of decoding intercepted messages.

Now both of these activities - arithmetic calculation and code decryption - are ones which would be regarded as indicating `intelligence' in an individual who performs them `well'. Hence we have one source of fallacies concerning computational mechanisms: since computers can (be programmed to) perform extremely well tasks that are normally found difficult, e.g. arithmetic, and since a human with some competence in such tasks would be considered `intelligent', it follows that computers are `intelligent' and therefore could solve other problems which we find hard to deal with. This is a rather casual piece of logic, but one which was (is, even) widely accepted.^7

7) For which fact newsreel coverage of early computer systems bears some responsibility: a famous newsreel of the Whirlwind computer, built at M.I.T. in the late 1940s, features an over-excited reporter, awestruck by the machine's 10-second calculation of a table of square roots, enthusing over the potential other uses of this machine.
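To make concrete the claim that decryption can be carried out mechanically, the following sketch breaks a Caesar (rotation) cipher by trying all 26 possible keys and scoring each candidate plaintext against common English letter frequencies. This toy cipher is, of course, vastly simpler than the machine ciphers attacked by COLOSSUS; the code and its scoring rule are illustrative only.

    # Break a Caesar cipher by exhaustive search over the 26 possible shifts.
    from string import ascii_lowercase as letters

    def shift(text, k):
        # rotate each letter k places through the alphabet, leaving spaces alone
        return ''.join(letters[(letters.index(c) + k) % 26] if c in letters else c
                       for c in text)

    def score(text):
        # crude measure of `English-ness': count occurrences of frequent letters
        return sum(text.count(c) for c in 'etaoinshr')

    ciphertext = shift('attack the castle at dawn', 7)
    best = max(range(26), key=lambda k: score(shift(ciphertext, -k)))
    print(best, shift(ciphertext, -best))   # 7 attack the castle at dawn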

One might argue, however, that there are many examples of mechanical devices that perform tasks well beyond the ability of a human agent - e.g. compact disc players, television sets, etc. - and no (sane) individual would regard these as `intelligent'; why, then, should computers be considered differently? Such an argument is, of course, perfectly sound: there is no reason. The misconception that computers are capable of `intelligent' action arises not only from their facility at tasks such as arithmetic but also from the fact that they can be configured (i.e. programmed) to carry out other, unrelated, tasks as well. In addition, a computer executing a program appears, in some sense, to be independent of human control.^8

8) This (illusory) appearance of detachment is used to considerable effect in at least three films: 2001, The Forbin Project, and the execrable Demon Seed.

The reasoning "computers can perform well some tasks that require `intelligence' and thus may be able to carry out other functions needing a similar ability" was one of the historical motivations behind a study that, at present, forms one of the principal fields in Computer Science: the area of Artificial Intelligence (A.I.). We shall return to this topic when discussing current issues in Computer Science in the concluding section of these notes. For the moment, it should be noted that the misrepresentation and exaggerated claims concerning the potential of A.I. systems has been, to a significant degree, a factor in the view of computers as threatening/beneficial objects.

Aside from the distorted perception of A.I., it is, undoubtedly, the application of computer systems as record-maintenance tools that has contributed most to public disquiet about computers. The following is a selection of areas in which personal records are stored, processed and maintained on computer databases:

- personal taxation records (by the Inland Revenue);
- student exam marks, course registrations, and other details (by the University of Liverpool, among others);
- vehicle licensing records (by the D.V.L.C. at Swansea);
- Social Security information (by the D.S.S. at Newcastle);
- car theft and motoring offences (by the Police National Computer);
- credit status and personal financial records (by banks, building societies, and credit card agencies);
- employee salary details and other personal information (by most large employers).

It can be seen that data processing and handling is one of the areas of computer activity which affects almost everybody. The last 20-25 years have seen, in this sphere, an enormous increase in the number of applications which are dealt with by computer-based record systems rather than by the more traditional paper filing techniques. This expansion in the amount of confidential and personal information held on computer systems has been viewed with trepidation by many individuals, largely on account of the following concerns:

- the difficulty of discovering what information about oneself is held, by whom, and for what purpose;
- the possibility that the information held may be incorrect;
- the possibility of unauthorised access to such information.

Some indication of the seriousness with which the last two points have been treated may be discerned in actions taken by Parliament within the last 10 years. In 1984 the Data Protection Act became law. This created the post of Data Protection Registrar, by whom all instances of computer recording of personal information had to be approved^9.

9) This provision did not, in fact, apply universally: certain classes of system were exempted, specifically those dealing with `national security' issues.

The Act also gave individuals the power to inspect computer record systems where they had reason to believe information relating to them was held^10 and to have such information corrected if it was in error.

10) The reason why the University of Liverpool now informs students of the actual marks they obtained on exam papers is in order to comply with the provisions of the Data Protection Act (although the Act only requires that such marks be released on request; voluntary disclosure is not, in fact, a legal requirement). Ten years ago such action would have been considered unthinkable in many quarters.

The other legislative action occurred in 1990 when The Computer Misuse Act, a Private Member's bill sponsored by the Conservative M.P., Emma Nicholson, became law. This criminalised the activity of obtaining unauthorised access to computer systems - so called `hacking'. Regrettably its success has been rather mixed: the only prosecutions brought to date have resulted in one highly publicised acquittal and custodial sentences for two individuals who pleaded guilty at trial.

In summary, we can see that two of the main causes of the public awareness of computer development have been the popular reporting of A.I. potential - and the consequent fallacious extrapolation from facility at arithmetic to a capability for intelligent, independent action - and the growth of computer information storage systems, with the attendant problems these create. We conclude this sub-section by, briefly, examining a few other areas in which computer technology has become influential and the effect of these developments on the general perception of computers.

Some application areas which have been noticeably affected by technological developments are: process control; graphical displays; and transaction systems.

Process control concerns the use of computer systems to regulate processes ranging from chemical production plants and nuclear power stations to washing machines and video recorders. In these applications the fact that computer mechanisms are being used is often not obvious; part of the reason for this is the increased sophistication of the devices that carry out the control actions, so that the physical space occupied by the controller is very small.
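The essential idea of closed-loop control can be sketched in a few lines: a controller repeatedly compares a measured quantity with a desired set-point and adjusts its output accordingly. The simulated thermostat below is purely illustrative; the `plant' model and all constants are invented.

    # A toy closed-loop controller: a thermostat switching a heater on or off
    # to hold a simulated temperature near a set-point.
    set_point = 60.0
    temperature = 20.0

    for second in range(120):
        heater_on = temperature < set_point        # the control decision
        temperature += 0.8 if heater_on else -0.3  # crude model of the controlled process
        if second % 30 == 0:
            print(second, round(temperature, 1), heater_on)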

Similar technological progress accounts for the advances and wider applications of computer graphics. Unfortunately much of the exploitation of this capability has been in somewhat trivial areas: special effects in films, arcade and computer games. The opprobrium the latter have attracted is an interesting sociological phenomenon, viz. reports about children addicted to such games; claims that these may trigger epileptic fits or cause psychological illnesses; assertions that they encourage sociopathic behaviour; and the belief that these games deprive children of time that could be spent in more `worthwhile' pursuits^11.

11) The last is a recurring objection to new entertainment media: contemporary sources may be found inveighing against comics and cheap thrillers, gramophone records, cinema, radio, television, etc. Undoubtedly the 15th century invention of the printing press attracted similar opprobrium from those claiming to be solicitous of children's well-being.

Finally, transaction handling via computer systems has increased greatly since the early 1970s. Common examples of such systems are: automatic cash dispensers used by banks and building societies; airline booking and other seat reservation applications; library information services, e.g. the LIBIS system used at Liverpool University. In the first area - cash dispensers - some concerns have been expressed about security, specifically the occurrence of `phantom withdrawals' from accounts. At present there has been no resolution of the dispute between banks (who maintain that all questioned transactions occur because customers have released their identification code to a third party) and consumer organisations (who claim that unauthorised withdrawals from accounts are possible without such a code being disclosed). It seems probable, however, that this argument will be resolved in favour of the consumer interests.

1.3. Current Scientific Concerns in Computer Science

In the preceding sub-section we discussed those developments in computer technology, over the last quarter of a century, that have had a significant impact on the public awareness of computers. We conclude this opening section by examining a few of the research areas which are currently of importance in the development of Computer Science as a scientific discipline. The topics discussed, briefly, below are not intended to provide an exhaustive survey, but merely to give an overview of some areas which are being pursued.

We shall concentrate on three general areas in which there has been significant interest over the last few years:

- Artificial Intelligence;
- non-traditional approaches to computation;
- the mathematical theory of computation.

1.4. Artificial Intelligence (A.I.)

We have already mentioned this area in connection with misconceptions about computer potential. There are a number of competing factions working in this field who disagree over what the exact aims of A.I. should be. It is to be regretted that the strength of this disagreement is such that opposing camps often comment on ideas whose validity they dispute in a style which might be considered inappropriate to the conduct of a rational, scientific discourse. The two predominant opinions may be summarised as:

- the view that the aim of A.I. is to model, and thereby explain, human intelligence, ultimately producing machines which could genuinely be said to `think';
- the view that the aim of A.I. is to engineer systems which perform specific tasks that would be regarded as requiring intelligence if carried out by a human, without any claim that such systems are themselves `intelligent'.

The latter approach has, without question, been the more successful. One of its most important contributions has been the development of what are called Expert Systems. Such systems are attempts to replicate the advisory and decision-making processes used by experts in specialist domains of knowledge. Fields in which successful systems have been built include: medical diagnosis (for particular classes of ailment); legal systems (for interpreting specific items of legislation); and mineral prospecting. Research into extending the range and developing the capabilities of such systems is, at present, one of the central areas of computer science.
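At the heart of many expert systems is rule-based inference. The sketch below applies a small set of if-then rules to a set of known facts until no new conclusions can be drawn (so-called forward chaining). The rules and facts are invented purely for illustration and have no clinical standing.

    # Forward-chaining inference over a tiny, invented rule base.
    rules = [
        ({"fever", "rash"}, "suspect measles"),
        ({"suspect measles", "not vaccinated"}, "recommend specialist referral"),
    ]
    facts = {"fever", "rash", "not vaccinated"}

    changed = True
    while changed:                      # keep applying rules until nothing new is concluded
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # every condition holds, so record the conclusion
                changed = True

    print(facts)   # includes both derived conclusions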

The alternative school has concentrated on areas such as understanding natural language, and on philosophical debate about what constitutes an `intelligent machine'. In recent years there has been a growing interaction between this approach and work carried out in cognitive psychology, neurophysiology, and linguistics. In this respect some interesting work is being performed. Arguably, both approaches are still suffering the consequences of inflated claims made in the 1960s and 1970s about what could be delivered.^12

12) There are numerous stories about failed A.I. systems, e.g. in the 1970s the U.S. Intelligence agencies funded a project for determining geo-political strategies; on being asked to draw a conclusion from the three statements: `Russia is a communist dictatorship'; `Russia is hostile to the U.S.A.' and `Cuba is a communist dictatorship', the system responded `Cuba is in Russia'.

1.5. Non-traditional approaches to computation

Non-traditional, or more properly non-von Neumann^13

13) After the Hungarian-born, U.S. computer scientist John von Neumann (1903-57) who is credited with the first formal description of the structure of general-purpose computers.

computer models have become established as a core research area in computer science in the last 10 years. Four sub-topics within this field are currently the subject of much attention:

A. Neural Networks

The theory behind these developed from biological models of the working of the human brain. Such networks have been successfully applied to certain categories of pattern recognition problems, e.g. voice recognition. One of the computationally interesting aspects of the neural network approach is the fact that neural networks are customised by being `trained' with examples of the objects to be recognised. Thus the internal characteristics of the network are modified, sometimes by the network itself, until a satisfactory performance level is attained. Investigation of learning and training approaches forms an important facet of work on neural networks. In addition, advances in the physical construction of computer systems have created the possibility of building very large neural networks inexpensively.
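The idea of `training by example' can be illustrated in its simplest possible form: a single perceptron (one artificial neuron) learning the logical AND function by adjusting its internal weights whenever it misclassifies a training example. Real networks contain many such units and use richer learning rules; the constants below are illustrative.

    # A single perceptron learning the AND function from examples.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w1, w2, b = 0.0, 0.0, 0.0     # the `internal characteristics' to be trained
    rate = 0.1

    for epoch in range(50):                        # repeated presentation of the examples
        for (x1, x2), target in data:
            output = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            error = target - output
            w1 += rate * error * x1                # adjust the weights in response
            w2 += rate * error * x2                # to each mistake
            b += rate * error

    print([1 if w1 * x1 + w2 * x2 + b > 0 else 0 for (x1, x2), _ in data])   # [0, 0, 0, 1]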

B. Genetic algorithms

Genetic algorithms provide another example of ideas from biology being translated into computational analogues. The idea underlying these is to emulate the evolution of DNA sequences as a means of solving computational problems. Thus the processes by which DNA strings combine and evolve are mirrored by symbol manipulation operations on sequences of symbols. Some promising approaches have been developed for a number of difficult optimisation problems. Research in this area, at present, is concerned with developing the theoretical basis of these methods (which is currently not well understood) and with extending their applicability.
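The basic cycle of selection, crossover and mutation can be sketched on a toy problem: evolving a population of bit-strings towards a fixed target string. All of the parameters (population size, mutation rate, target) are invented for illustration.

    # A toy genetic algorithm evolving bit-strings towards a target.
    import random

    TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

    def fitness(s):
        return sum(1 for a, b in zip(s, TARGET) if a == b)   # number of matching bits

    def crossover(a, b):
        cut = random.randrange(1, len(a))                    # combine two parent strings
        return a[:cut] + b[cut:]

    def mutate(s, rate=0.05):
        return [1 - bit if random.random() < rate else bit for bit in s]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        parents = population[:10]                            # the fitter half survive
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(10)]
        population = parents + children

    print(generation, population[0])   # usually reaches the target within a few dozen generations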

C. Quantum Computation

Whereas neural networks and genetic algorithms have come about through importing ideas from biology, quantum computation has its origins in developments in physics - specifically the exploitation of quantum-level effects as a computational device. The theoretical properties of quantum computers were first described by the physicist David Deutsch^14, who raised the possibility that such machines may be considerably faster than classical ones.

14) In Deutsch, D.: `Quantum theory, the Church-Turing principle and the universal quantum computer'; Proc. Royal Soc. of London, Series A, (400), pp. 97-117 (1985). The concluding section of this paper gives a novel, if somewhat technical, discussion of the relationship between computer science and modern theoretical physics.

At present much of the current research on this model is of a highly theoretical nature (hindered by the fact that a number of publications mis-report Deutsch's conclusions). There is, however, some work in progress concerning the feasibility of constructing quantum computers.
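Some of the flavour of these ideas can be conveyed by simulating, on a classical machine, the simplest quantum algorithm: Deutsch's two-qubit algorithm, which determines with a single evaluation whether a function f from {0,1} to {0,1} is constant or balanced. The state-vector simulation below is only a sketch; an actual quantum computer would not, of course, be programmed in this way.

    # Classical simulation of Deutsch's two-qubit algorithm.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    I = np.eye(2)

    def deutsch(f):
        # amplitudes indexed by 2*x + y for basis states |x>|y>; start in |0>|1>
        state = np.zeros(4)
        state[0b01] = 1.0
        state = np.kron(H, H) @ state              # put both qubits into superposition
        U = np.zeros((4, 4))                       # oracle: |x, y> -> |x, y xor f(x)>
        for x in (0, 1):
            for y in (0, 1):
                U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
        state = U @ state
        state = np.kron(H, I) @ state              # interfere the first qubit
        prob_one = state[0b10] ** 2 + state[0b11] ** 2
        return 'balanced' if prob_one > 0.5 else 'constant'

    print(deutsch(lambda x: 0))   # constant
    print(deutsch(lambda x: x))   # balanced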

D. Parallel Computers

The methods described above may be seen as particular special cases of a more general class of non-von Neumann computers: parallel computers. In these a (potentially very large) number of individual computers are `linked' together in order to provide a more powerful machine. There are many significant research problems being addressed in this field at present: developing parallel machines for specific applications; designing methodologies for programming on parallel computers; analysing the efficiency of parallel solutions; etc. This is an area that has only really begun to develop within the last ten years and is likely to pose important questions in Computer Science for a number of years to come.
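The basic idea - dividing a single computation among several co-operating processors - can be illustrated by splitting a large summation into independent parts, each handled by a separate worker process. The task and the constants below are chosen purely for illustration.

    # Splitting a summation across several worker processes.
    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(range(lo, hi))           # each worker sums its own sub-range

    if __name__ == '__main__':
        n, workers = 10_000_000, 4
        step = n // workers
        chunks = [(i * step, (i + 1) * step) for i in range(workers)]
        chunks[-1] = (chunks[-1][0], n)      # the last chunk absorbs any remainder
        with ProcessPoolExecutor(max_workers=workers) as pool:
            total = sum(pool.map(partial_sum, chunks))
        print(total == n * (n - 1) // 2)     # True: the parallel and exact answers agree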

1.6. Mathematical Theory of Computation

This group of subjects forms one of the oldest traditional research concerns in Computer Science. Its origin (arguably in 1900^15) predates the appearance of the first modern computer systems by almost fifty years.

15) I have chosen this date from the presentation of David Hilbert's lecture `Mathematical Problems', given during the International Congress of Mathematicians at Paris in 1900, the text of which was published in Archiv der Mathematik und Physik, (1), pp. 44-63, 213-237 (1901). Hilbert, in this lecture, presented a collection of 23 open problems in mathematics; his second problem (on the consistency of the axioms of arithmetic) led, within Hilbert's wider programme, to the Entscheidungsproblem - the challenge of constructing a specific algorithm (i.e. program) for deciding mathematical assertions.

The general aim of this field is to address questions in the design of computer systems and programs from a formal mathematical perspective. Within this area fall three studies of principal interest:

A. Semantics of programming languages

Informally this area is concerned with how to attach precise `meanings' to constructs that occur in computer programs. The motivation behind this is twofold. First, if the actual behaviour of a program can be defined in a rigorous enough manner, then the problem of showing that the program fulfils a particular function reduces to that of mathematically proving that the semantics of the program accord with the intended functionality. This objective is the goal of formal verification, which was first mooted in the early 1970s by, among others, the Dutch mathematician Dijkstra, the English computer scientist Hoare, and the U.S. computer scientist Floyd. The other motivation for the theory of program semantics is that, if precise meanings can be attached to programs, then it becomes possible to specify formally the functionality a program should achieve. Formal specification concerns the construction of methods by which the requirements of a system can be described precisely and unambiguously. While some success has been achieved with the design of specification systems (most notably at Manchester and Oxford), the existing formal verification techniques have yet to achieve any great level of sophistication. Programming semantics is one of the internationally recognised areas of expertise in British computer science.
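The distinction between specification and verification can be illustrated in miniature. Below, the specification of integer division is expressed as a pre-condition and a post-condition which are merely checked at run time; formal verification would instead prove, once and for all, that the post-condition holds for every input satisfying the pre-condition. The example is a sketch and is not written in any particular specification language.

    # Integer division with its specification written as run-time checks.
    def divide(a, b):
        assert a >= 0 and b > 0                  # pre-condition: what callers must guarantee
        q, r = 0, a
        while r >= b:                            # invariant: a == q * b + r and r >= 0
            q, r = q + 1, r - b
        assert a == q * b + r and 0 <= r < b     # post-condition: what the program guarantees
        return q, r

    print(divide(17, 5))   # (3, 2)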

B. Computability Theory

Computability theory is concerned with the classification of which problems can and cannot be solved by computer programs. The first significant work in this field dates back to the 1930s, when concepts of what constitutes a valid computational system were proposed by Emil Post, Alan Turing, Stephen Kleene, and many others. There has recently been a revival of work in this field, partly as a consequence of the developments in non-von Neumann models discussed earlier. Some of these models challenge the premises which are used in the proofs that certain problems cannot be solved. It is, currently, unclear whether the models with the strongest claims in this area would actually be constructible in practice.
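The classical style of argument in this field can be sketched as follows. Suppose, for contradiction, that a program halts(p, x) existed which could always decide whether program p halts on input x. The fragment below records Turing's diagonal construction; halts is only a hypothetical placeholder (it cannot actually be written), and the comments carry the argument.

    # A sketch of the diagonal argument against a halting-problem decider.
    def halts(program, data):
        """Hypothetical decider: would return True iff program(data) halts."""
        raise NotImplementedError      # assumed to exist only for the sake of contradiction

    def diagonal(program):
        if halts(program, program):    # if `program' would halt when run on itself...
            while True:                # ...then loop forever;
                pass
        return 'halted'                # ...otherwise halt immediately.

    # Asking whether diagonal(diagonal) halts contradicts whatever answer
    # halts(diagonal, diagonal) would give, so no such decider can exist.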

C. Computational Complexity Theory

While computability theory is concerned with the question of which problems can be solved by computers, computational complexity is concerned with which problems can be solved efficiently. The term `computational complexity' was first coined in the mid-1960s by Hartmanis and Stearns, but related work in this field has been in progress since the 1930s, notably the work of Shannon, Shestakov, and Lupanov on building efficient `hardware'. Several of the most important open questions in Computer Science and mathematics are ones that have been raised as a consequence of work in this area, the most widely studied of which has been the P =? NP question first formulated by Stephen Cook in 1971. Briefly, this asks whether a specific class of `decision problems' can be solved by `efficient' programs; a positive answer (which is not expected by most experts in the field) would have considerable implications for a very large number of application areas. This field is another area of Computer Science in which Britain has a considerable international reputation.
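Some intuition for the P =? NP question can be gained from a small example. For the subset-sum problem below, verifying a proposed solution (a `certificate') takes only a handful of operations, whereas the obvious method of finding a solution may examine up to 2^n subsets. Whether every problem whose solutions can be verified efficiently can also be solved efficiently is, in essence, the P =? NP question. The code and data are illustrative.

    # Subset-sum: easy to verify a proposed answer, apparently hard to find one.
    from itertools import combinations

    def verify(subset, target):
        return sum(subset) == target            # checking a certificate: one sum and a comparison

    def solve(numbers, target):
        for r in range(len(numbers) + 1):       # exhaustive search over up to 2^n subsets
            for subset in combinations(numbers, r):
                if verify(subset, target):
                    return subset
        return None

    print(solve([3, 9, 8, 4, 5, 7], 15))        # (8, 7)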

1.7. Summary

Computer technology has developed to a considerable degree over the last 20 years. This has resulted in certain applications of computers impinging on various aspects of individuals' day-to-day lives. As a scientific discipline, Computer Science has developed research interests that have led to interaction with older disciplines such as Biology, Psychology, and Physics. Unfortunately, it is still the case that the advent of increased computer application has been met with either public concern and suspicion or with over-optimistic beliefs about what computers can do.
