Is the brain a computer and the mind a program?

An investigation into the computational model of the mind

This question lies at the heart of the functionalist view of the mind. Can our hopes, desires, beliefs and intentions be explained purely in terms of formal information processing or does this leave out much that is still mysterious and ephemeral? Are rationality and self-consciousness no more than recent innovations of a mindless evolution, designed to better our genes’ chances of survival - a smart move in design space? And, if we accept that the laws of physics are our only analytical tools, then how are we to account for the elusive qualia: the smell of this coffee, the colour of that rose, the scent of her hair?

This essay examines whether the computational model of the mind presents a plausible and coherent explanation of mentality. We describe a functional model of the mind and ask if the sensory phenomena can also be modelled computationally. Our initial stance will treat the brain as a classical device. Later we ask whether some aspects of consciousness require an explanation involving quantum-level phenomena.

Firstly however, a few definitions: what is a mind, what are computers, and how might they be related?

The Mind

The mind is our first-person perspective on the world - that set of mental states of which we are intimately and consciously aware: beliefs, desires, hopes, fears, dreams, deductions, intentions etc. Uniquely, the mind has intentionality - mental states have aboutness and are directed at some object. In addition, the mind exhibits a dynamic evolution over time: mental states are perpetually created, modified and destroyed as part of the continuous process we call consciousness. The mind also has contingent agency - mental states can be created by sensory input and can cause effects in the external world through triggering actions in the body. The mind is that entity within which a host of concepts and impressions compete for ascendancy. As Marvin Minsky says, the mind is what the brain does: a device for making the right decisions about how to stay alive to eat another day.

Whereas the unconscious mind undoubtedly has mental states, it might be regarded as a biochemical feedback control system: a pre-processor of sensory data prior to presentation to the conscious mind and a controller for the motor systems of the body. Many lower animals with relatively few neurons (simple invertebrates, insects etc.) are generally agreed to be mind-less and yet exhibit complex and apparently thoughtful behaviour. The Nagel-esque question ‘what is it like to be an ant?’ is then meaningless - only for a minded entity is there anything it is like to be it.

The mind-body problem, if one accepts it at all, only has bearing on the conscious mind. The task of the materialist is to suggest a physical mechanism whereby inanimate matter can give rise to the richness of our conscious mental experience.

Functionalism

Functionalism is the prevailing materialist thesis which seeks to explain the mind in terms of the functional roles played by a set of representational states. Mental states are to be identified with appropriately organised physical brain states, and acquire their representational content by virtue of their functional relationships to each other, to sensory states and to the contingent actions (behaviour) which follow. Functionalism refines the crude ‘black box’ model of behaviourism (which essentially denies mentality at all) into a block model, where the boxes represent functional roles within the mind. Functionalism is therefore behaviourism applied at a much lower level of abstraction and provides an essentially static view. To understand the dynamic behaviour we must consider the kinds of mental symbols represented and how they interrelate to create causation. This is the approach taken in the Computational Model of Mind.

Computers and Turing Machines

To explain this we must be more precise about what we mean by the term ‘computer’. Today the accepted definition is based on the concept of the Turing machine: a thought experiment devised by Alan Turing in 1936 while investigating whether the proof of mathematical theorems could be mechanised. Put simply, a Turing machine is an automaton which has a finite set of internal states, an infinite ‘tape’ passing through a read/write head, and a table of rules which decides, given the current state and the symbol under the head, which state to enter next and which action to take: write a new symbol, erase the current symbol, move the tape left or right, or halt. Through a suitable encoding of symbols and machine table, a given Turing machine can be made to perform any conceivable information processing task which is computable, i.e. which can be expressed as a sequence of simple steps applied according to a ‘recipe’ or algorithm. Examples range from long division through to playing chess or finding the prime factors of large integers.

The behaviour of each Turing machine is determined explicitly by its machine table. Turing’s major leap of imagination was to allow the rule table of any given Turing machine to be encoded onto the input tape. Another Turing machine with an appropriately complex rule table could then read the tape, internalise the machine description and then simulate the Turing machine described on the tape. This was the Universal Turing Machine which lies at the heart of all modern digital computers and also the computational model of the mind.
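
To make the definition concrete, here is a minimal sketch of a Turing machine interpreter in Python (an illustration of my own; the rule table shown simply appends a ‘1’ to a unary number and is not an example from the text):

from collections import defaultdict

def run_turing_machine(table, tape, state="start", halt="halt", blank="_"):
    # table maps (state, symbol) -> (next_state, symbol_to_write, head_move)
    cells = defaultdict(lambda: blank, enumerate(tape))   # the 'infinite' tape
    head = 0
    while state != halt:
        state, cells[head], move = table[(state, cells[head])]
        head += {"L": -1, "R": +1, "N": 0}[move]          # move the read/write head
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rule table for a trivial machine: scan right over the 1s, write one more, halt.
unary_increment = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt",  "1", "N"),
}
print(run_turing_machine(unary_increment, "111"))          # -> '1111'

The Python loop is in effect playing the part of a universal machine here: it will simulate whichever rule table it is handed.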

The Computational Model of Mind

The computational model says that what the mind principally does is to ‘process symbolic information’ in a way analogous to a digital computer. The mind has internal representational states which are isomorphic to the objects of its intentionality. These states are given functional and logical roles, juxtaposed and manipulated by a fixed set of underlying rules. Viewed externally, the brain exhibits sophisticated adaptive behaviour as if it is following some complex set of logical rules which fire one after the other as their premises match the mental states created by sensory input and contingent internal events. This is just what a computer does. Is it reasonable then to model the brain as the hardware substrate on which the software of the mind is executed?

The base currency of the mind is a set of (very many) predicates - propositions which have some truth value. These can be atomic facts, ‘tomatoes are red’ or ‘John is a man’ for example, or implications of the form X => Y. Predicates participate in differing logical relationships by virtue of functional roles assigned to them - the propositional attitudes. These have a general form: {attitude} that {proposition}, where {attitude} describes the functional role of the proposition: ‘I believe that…’, ‘I hope that…’, ‘I intend that…’ etc. One way of modeling the mind then is to represent each type of attitude as a labeled ‘box’ into which the predicate symbols are placed at varying times according to their functional roles. Causation is a side effect achieved by moving symbols between boxes according to ‘rational rules of thought’.

For example, I might have a set of predicates: ‘drinking liquid quenches thirst’, ‘water is a liquid’, ‘water comes from taps’, ‘glasses hold liquids’ and ‘I am thirsty’. I can create a set of initial mental states by assigning attitudes to these predicates thus:-

BELIEF( drinking liquid quenches thirst )
BELIEF( water comes from taps )
BELIEF( glasses hold liquids )
BELIEF( water is a liquid )
BELIEF( I am thirsty )
BELIEF( X holds Y and Z is a Y => X holds Z )
DESIRE( NOT I am thirsty )

The desire, not to remain thirsty, is logically combined with the set of beliefs to generate a set of intentions (actions) which are designed to achieve the desired state (assigning the attitude BELIEF to the object of the original DESIRE attitude). Hence (ignoring a few intermediate steps) I can generate the new predicate roles:-

INTENTION( fill glass from tap )
INTENTION( drink a glass of water )
BELIEF( NOT I am thirsty )

And hence delete DESIRE( NOT I am thirsty ).

There are two important aspects of the above sleight of mind. Firstly, the juggling of predicates between different causal roles is context-free, i.e. not dependent upon the meaning of the predicates, only their roles and relations at a syntactic level. Secondly, the underlying reductive mechanism is just a relatively small set of logical rules of inference - the most important being modus ponens ( if {A => B} and A are true, then conclude B ). Inference rules alone however do not produce causation – they need to be applied by a higher-level, goal-driven, Prolog-like interpreter. Thus, to achieve goal Y look for a BELIEF of the form X => Y and then (recursively) achieve the new sub-goal X. This regress will eventually ground in some atomic intention – drink a glass of water, for example. Viewed externally, such a system will produce the predictable, rational behaviour we expect of minds (the intentional stance). The similarity to the operations of a Turing machine is now apparent: both systems have representational symbols (mental states) which are shuffled between roles (propositional attitudes) according to a fixed set of rules (machine table).
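
To make the mechanism concrete, the following Python sketch reduces the DESIRE from the thirst example to a chain of INTENTIONs by applying modus ponens in reverse, ignoring (as the example above does) the intermediate steps involving the other beliefs. The encoding of the beliefs and the function name are my own illustrative choices, not a claim about how such an interpreter must be built:

# Implications X => Y from the example, keyed here by their consequent Y.
BELIEFS = {
    "NOT I am thirsty": "drink a glass of water",
    "drink a glass of water": "fill glass from tap",
}
ATOMIC_ACTIONS = {"fill glass from tap"}      # intentions the body can execute directly

def achieve(goal):
    # To achieve goal Y, look for a BELIEF of the form X => Y and recursively
    # achieve the sub-goal X; the regress grounds in an atomic action.
    if goal in ATOMIC_ACTIONS:
        return [goal]
    return achieve(BELIEFS[goal]) + [goal]

desire = "NOT I am thirsty"
for action in achieve(BELIEFS[desire]):       # reduce the desire to a chain of actions
    print(f"INTENTION( {action} )")
print(f"BELIEF( {desire} )")                  # the object of the DESIRE becomes a BELIEF

Viewed from the outside, the printout reproduces the predicate roles listed above; viewed from the inside, it is nothing but context-free symbol shuffling.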

This computational view of the mind has a great attraction to many. It places mentality on a firmly materialist basis which should be susceptible to experimental verification and provides an explanation for representation, meaning and rationality. It says nothing however about the phenomenal qualities of mind – how the qualia are manifested and their relation to mental states. Neither does the computational model require consciousness for its operation. Not surprisingly then, it has attracted objections from many sources, including philosophers and computer scientists. We shall briefly examine the main arguments.

The Artificial Intelligence Objection

Artificial Intelligence (AI) has the objective of creating cognitive agents capable of performing tasks requiring human-level intelligence. The ultimate aim of course is to create an artificial self-conscious mentality within a machine - a goal promised, some forty years ago, to be deliverable well before the end of the century.

Embracing the computational model of mind, AI comes in two strengths:-

  • Strong AI says that mind is just computation and that any functional instantiation of the appropriate symbolic processing (program) will generate the same mental behavior, including the inner phenomenal experience. The key claim is that mindedness is substrate neutral - as long as the internal states, symbols, axioms and inference rules are correctly implemented in the right functional relationship then the physical medium of the system is irrelevant. Thus the mind of the ardent functionalist could in principle be replaced by a digital computer, a system of valves and water pipes, windmills and beer cans, or any contraption capable of executing an (appropriately complex) algorithm.
  • The weak form of AI says that a mind can be simulated by purely computational mechanisms and that such a simulation would exhibit externally all of the behavior of a thinking, minded being. However, whether the simulation would experience understanding, consciousness and inner sensation (qualia) is an open question to be determined empirically.

Stated thus, AI might be seen as a useful existence proof for the computational model. The early decades of AI were dominated by a top-down symbolic approach based on predicate calculus, backward-chaining and goal-directed theorem proving. Good Old Fashioned AI (GOFAI) had some early successes - largely in the formal fields of mathematical theorem proving (Newell and Simon’s ‘General Problem Solver’) and ‘expert systems’ (Dendral, Prospector, MYCIN etc.). The field became seriously bogged down however when it moved into domains requiring a more general knowledge base about how the physical world works (common sense reasoning). Beyond toy-like ‘blocks world’ domains, these top-down systems quickly ran up against the notorious ‘frame problem’: deciding from their declarative knowledge what is relevant to the task at hand, what remains invariant as actions are performed, and how to generalise from learned examples. Early robots would spend their brief cognitive lives endlessly pattern-matching against myriads of irrelevant facts.

Common sense reasoning is the essence of the frame problem. Many critics believe that mind is not possible without intimate association with a body. Intelligence requires ‘the background of common sense that human beings have by virtue of having bodies, interacting skillfully with the material world and being trained in a culture.’ Our ability to reason successfully is due not to declarative knowledge plus inference rules but to procedural know-how derived from experience of living in a physical world. No system of rules, however complex, can ever capture the flexibility and generality of this procedural knowledge and efforts to simulate general intelligence through top-down rules alone are doomed to failure.

Leaving aside the problem of qualia, the limited success of symbolic AI has surely shown that a symbolic computation model is far too simplistic to account for many features of real minds:-

  • Seemingly effortless reasoning about the physical world in real-time.
  • Reasoning successfully with fuzzy and incomplete information.
  • Adaptability: the ability to learn and generalise from experience to cope with new situations.
  • Robustness: the graceful degradation in performance as parts of the brain are damaged in contrast to the catastrophic failure of brittle computer programs.

It is left to more holistic computational models based upon connectionism to push the AI agenda forward (see below).

The Chinese Room Objection

The most quoted critic of the computational model is John Searle with his famous Chinese Room. The details of this ingenious thought experiment are well known and will not be repeated here. Searle is agnostic concerning weak AI but a firm disbeliever in the strong form. His argument is as follows:-

1. Understanding requires meaning.
2. Syntactic manipulation of symbols alone has no intrinsic meaning.
3. Thus: pure syntactic engines such as computer programs have no understanding.
4. Minds have true understanding and thus cannot be computer programs.

Symbols in themselves have no intrinsic meaning or semantics until placed in some interpretive relationship with the real world - a process outside the scope of the program. Thus, by shuffling characters into syntactically correct Chinese, the Chinese Room program may achieve correct results while having no understanding of its success. Searle’s conclusions are based upon some technical misunderstanding of how such a program would in fact be constructed, together with misleading assumptions about scale, complexity, and the role of Searle-the-observer.

Firstly, says Searle, the Searle-in-the-room gains no understanding of Chinese by shuffling characters according to a book of English rules (the computer program). But surely this is to be expected - Searle here is playing the role of the computer hardware, which only has ‘understanding’ of its basic operational instructions (how to shuffle symbols about the room). The obvious reply is that any understanding is a property of the system as a whole - the formal algorithm together with the autonomous mechanism which provides its interpretation plus its links to and from the external world. A Pentium processor could itself never be intelligent but a Pentium executing some unimaginably complex AI program is (in principle) a different matter.

Searle replies with the spurious contention that the ‘program’, which is stated to be able to pass the Turing test, is just a ‘few scraps of paper’, and asks how he plus a few scraps of paper could suddenly acquire understanding. This is to misunderstand (deliberately?) the likely complexity of such a program, the code for which would probably run to millions of pages of symbols. We are not dealing here with a few scraps of paper!

Searle’s next move is to internalise the program by memorising it and executing it in his own mind. He still has no understanding of Chinese although the computer program is now ‘part of’ him - and he part of it. Again we are faced with the implausibility of this glib proposal - we are asked to believe that a human brain could assimilate a million-page source program and mentally execute it, keeping track of the simultaneous state of thousands of variables. Given the Turing test pedigree of the program, this would be like trying to assimilate someone else’s mind at a neuronal level and then to somehow ‘become’ them.

However, suppose we accept Searle’s proposal in principle. Does Searle-as-CPU have any new understanding of Chinese in his own mind? The answer, as Searle asserts, is indeed ‘no’ - his pre-existing mental states (memories, beliefs, histories, associations etc.) are unaffected by (i.e. have no semantic relation to) the superimposed ‘program’ and cannot now have some new understanding. However, it is a very different question as to whether the ‘program’ itself has understanding. The program has its own complex set of ‘mental’ states which are represented in the working ‘variables’ simulated in Searle’s uncomprehending mind together with complex procedural (semantic) links within its own logic. Searle is aware of these states syntactically but they have no understandable semantics within his mind – but they do have semantics within the inner experience of the program.

The machine analogy is as follows: Searle’s conscious mind corresponds to the hardware level, capable of executing a set of simple instructions (the rules of the program) without understanding their higher purpose. The program itself constitutes a multiple-level nest of virtual machines, each of which has a higher-order instruction set implemented in terms of the base mind set. It is then within the states and procedures of this network of co-operating virtual machines that a representational awareness and, arguably, understanding of Chinese exists. To Searle’s mind however this program is a black box whose inner mentality is forever beyond his grasp. This in turn raises the intriguing question as to whether there are already such virtual machines within our own minds - embedded islands of inner-consciousness thinking their own private thoughts.
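
As a toy illustration of these levels (a sketch of my own, not Searle’s formulation): the routine below, standing in for Searle-as-CPU, does nothing but compare and copy strings of symbols, while whatever limited conversational competence exists resides entirely in the rule table it is handed.

def shuffle_symbols(rule_table, input_symbols):
    # Blind symbol manipulation: compare the input against each stored pattern
    # and, on a match, copy out the stored reply. No semantics is involved.
    for pattern, reply in rule_table:
        if pattern == input_symbols:
            return reply
    return "?"                                     # no rule applies

# The rule table plays the role of the rule book handed to Searle-in-the-room.
rules = [("ni hao ma", "wo hen hao"),              # 'how are you?' -> 'I am fine'
         ("xie xie",   "bu ke qi")]                # 'thank you'    -> 'not at all'
print(shuffle_symbols(rules, "ni hao ma"))         # -> 'wo hen hao'

Scaled up by many orders of magnitude, and given state that persists between exchanges, this rule table is the level at which, on the argument above, any understanding would have to reside.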

To work as planned, Searle’s argument requires us to be able to imagine actually simulating the program in our heads. Any program for which this is possible is necessarily so simple as to have no understanding, almost by definition. Searle asserts that if this is true then how could ‘just more of the same’ result in understanding – how can adding material of the same generic nature to the program produce a phase transition in understanding? This is a fallacy which could be applied just as well to the brain – no sufficiently small part of the brain (a group or sub-system of neurons) can in itself show understanding, and so how can the whole brain (just more of the same) understand Chinese?

The Missing Qualia Objection

Another common objection to a mind simulated in silico is the intuitive impression that such a mind, while displaying all of the externally verifiable functional characteristics of the mind it simulates, would have no internal experience of qualitative phenomena such as redness, softness, or ‘how it seems to me’-ness. This argument seems to be based on nothing more than incredulity that such feelings can be somehow instantiated in ordinary matter – mere ‘stuff’ as it were. Unless we resort to Descartes’ mystical ‘mind-stuff’ as a container for such qualia, then we must admit that matter is all we have in the case of our own brains. How can a mere bunch of neurons steeped in appropriate neurotransmitters generate the qualia we experience?

The most famous thought experiment in this vein is the Chinese Brain (Ned Block) - perhaps somewhat misleading because the self-consciousness of each of the Chinese persons in the experiment (like Searle-in-the-Room) is totally irrelevant to the argument. As in the Chinese Room, the problem is one of viewpoint: viewing the system at the right level of abstraction so as to see the concepts and phenomena rather than the underlying firing of neurons (or machine-level instructions).

The Gödelian Objection

A mathematical objection to the possibility of intelligent computers comes from JR Lucas and, most notably, Roger Penrose. Lucas first claimed that Gödel’s Incompleteness Theorem somehow places a fundamental limit on the intelligence permissible within any finite state machine realisation of the mind. Penrose extends the argument within a different agenda to arrive at some startling conclusions – much to the righteous indignation of the AI community. The Gödelian objection makes statements about the possible reasoning powers of an algorithmic mind but says nothing about its consciousness or phenomenal status.

Gödel’s theorem states that any sufficiently powerful, consistent formal system of the type realisable by Turing machines contains true statements which cannot be proved within the system. Such statements, known as Gödel sentences, are effectively of the form ‘this statement cannot be proved within this system’ and are sophisticated relatives of the Liar Paradox.
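
Schematically (the notation here is informal, and not taken from Lucas or Penrose): for a consistent formal system F with a provability predicate PROVABLE-IN-F, a Gödel sentence G is constructed so that

G <=> NOT PROVABLE-IN-F( G )

If F is consistent then G cannot be proved within F; but that is exactly what G asserts, so G is true.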

The Penrose/Lucas argument is basically as follows:-

1. All formal systems are subject to Gödel’s Theorem.
2. The human mind is (clearly) not subject to Gödel’s Theorem.
3. Thus the human mind is not a formal algorithmic system, i.e. a computer program.

The main justification here is evidence that the human mind can see the truth of a Gödel sentence which the formal system within which the sentence is expressed cannot prove. Human minds, then, can deduce what is provably not computable by machine. Penrose famously embarks here on his homage to the superiority of human mathematicians with their divine sense of mathematical insight. Only some non-computational facility of the mind, he concludes, can explain such capabilities.

Surely the central fallacy here is to regard the mind in toto as constituting a single formal system which purports even approximately to be complete and consistent in the sense demanded of a Gödel system. Any half-serious examination of the products of evolution will quickly dispel the notion of nature as a designer of great insight or perfection. Despite its fabled elevation as an object of good design, the eye is an object lesson in cack-handedness – witness the neural wiring of the rods and cones running over the front surface of the retina rather than being connected from behind. Evolution has equipped us with a mind which is a rag-bag of tricks and smart moves; rules-of-thumb for staying alive long enough to reproduce. The mind is shot through with all sorts of inconsistency and self-contradiction and as such is wholly outside the scope of Gödel’s Theorem. The Gödelian objection does not preclude the mind from being a collection of coupled algorithmic systems as long as we admit (thankfully) to the presence of inconsistency and error.

Human reasoning provides no guarantee of soundness or freedom from error and yet Penrose insists that this must be a property of any machine intelligence. There is no known tractable algorithm guaranteed to win at chess, and yet this does not prevent computers wielding heuristic algorithms from beating Grandmasters. As long as the algorithm (for staying alive) is good enough, then a guarantee of its soundness is not essential. To use Dennett’s analogy, Penrose is looking to prove that the mind is a skyhook, hoisting humanity (or perhaps just mathematicians) above the level of the machine, while all along human intelligence is no more than another crane engineered by natural selection.

Is the Mind a Quantum Computer?

Several contemporary writers have argued that the super-Gödelian rational powers of the mind together with the phenomenon of consciousness cannot be fully explained within the framework of classical physics. If we struggle to see how classical matter, acting according to deterministic laws, can fully account for our mental experience, then does the ghostly quantum world offer any useful insight?

Michael Lockwood asks why we perceive only a single phenomenal outcome when quantum theory says that reality is actually a superposition of possible outcomes. The act of observation somehow causes only one outcome to appear real to us according to its corresponding probability. This is the famous Measurement Problem of which there are several mainstream accounts: the Copenhagen interpretation, in which the complex wave function instantaneously collapses to one of its component eigenstates, and Everett’s ‘many worlds’ view where each outcome is realised in a separate causal partition (universe) of reality. These interpretations are not however proper explanations – we have no physical laws to explain why this state reduction or universe-splitting should occur. Because the mathematics provides us with a wonderfully accurate predictive method, we have tended to concentrate on the applications of quantum mechanics without becoming too troubled by this missing level in our understanding.

Lockwood suggests an explanation of our single-threaded conscious perception with his Designated State model. Here a physical part of the brain connected with consciousness becomes entangled (correlated) with particular eigenvalues (outcomes) of some designated set of observables in the wave function of the total quantum state (observer + experiment or the perceiver + the external world). Because this physical brain component stands in some representational role to other correlated symbols in the brain, our consciousness only has direct perception of the single designated eigenstate and hence the world appears to us in its classical single-tracked form. This approach seems to avoid both the empty despair of the Copenhagen view and the extravagance of the many worlds view while requiring (if I have understood it) no new extensions to the underlying quantum theory itself.
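
In the conventional notation of quantum measurement theory (a schematic sketch of my own, not Lockwood’s formalism), the correlation he describes is an entangled superposition of world and brain states:

|Psi> = SUM over i of c_i |world outcome i> ⊗ |brain registers outcome i>

On the designated-state reading, each conscious perspective is tied to a single such brain state, so that experience appears classical and single-tracked without any physical collapse of the overall superposition.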

In his desperate bid to escape from the clutches of Gödel, Roger Penrose is looking for an explanation requiring a wholly new theory of quantum gravity. His contention is that the microtubules within the cytoskeletons of the brain’s neurons are of the right scale and energy to support a non-computational quantum coherence effect infusing the whole brain. (Note this is not the same as the quantum computation envisaged by workers like Deutsch – quantum computers still give computable results, only faster.) Given the fallacy of his initial premise concerning the non-algorithmic nature of mind, this seems an unnecessary, expensive and unconvincing excursus.

Connectionism

The above discussion of the symbolic computational model of mind has hopefully shown its inadequacy to fully explain even the rational level of mentality. Furthermore, it totally side-steps the issue of how the phenomenal character of mind is realised in physical matter. There is no realistic likelihood that a human-level mind can ever be successfully simulated through a top-down production system firing on a (very large) rulebase. However, it is a very different question as to whether some computational architecture is possible which will generate high-level mentality, including consciousness and (in some sense different from our own) inner qualia.

The connectionist approach provides a bottom-up model of mind which is also computational. (There is no space here to properly explain neural networks etc.) Connectionist architectures are still algorithmic and rule-based, but only at the level of individual nodes within the network. Unlike the localised symbols of a classical AI program, concepts and features in a neural network are emergent properties of the whole system – captured within the connection weights and distributed patterns of activation, not in any individual node. Symbolic rule-based reasoning can be implemented at a higher level on top of an underlying connectionist model if needed, and this is probably partially what happens when we reason at a conscious level. Critics such as Jerry Fodor argue that connectionism is just a lower-level implementation of symbolic processes and thus brings nothing new to the table. Since any computational architecture can be modelled by any other, this seems to miss the point and ignores the unique advantages of such distributed systems. Donald Davidson, a critic of the top-down model, advocates an interpretational approach which is consistent with connectionism: ‘Don’t start with mental states and then look for rational rules of connection – it is the rationalism which is fundamental to what it is to have a mind.’
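
To make the point about distributed representation concrete, here is a minimal hand-wired network in Python (an illustration of my own, not drawn from the connectionist literature). It computes exclusive-or, yet nothing corresponding to ‘XOR’ is stored at any single node - the competence exists only in the overall pattern of weights and thresholds:

def step(x):
    return 1 if x > 0 else 0                     # a crude all-or-nothing 'neuron'

def tiny_net(x1, x2):
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)         # hidden node: fires if either input is on
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)         # hidden node: fires only if both are on
    return step(1.0 * h1 - 1.0 * h2 - 0.5)       # output: 'either, but not both'

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", tiny_net(a, b))        # reproduces the XOR truth table

In a real network the weights would be learned rather than hand-set, but the same distributed character holds at vastly greater scale.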

The other move needed to render the computational model plausible is to close the Cartesian Theatre - the mind is not a monolithic agent observing a single line of thought from the cheap seats. Minds are complex assemblies of autonomous agents (Minsky uses the phrase Society of Mind), each embodying its own connective maps, rules, declarative axioms, procedural knowledge, meta-rules etc. These agents co-operate and compete within the mind; reactive and proactive bubbles in a turbulent sea of changing connection weights. While these agents communicate freely via some global ‘blackboard’, this is not some centralised area within the brain but is distributed throughout the entire cortex. Consciousness crystallises out of the haze as an emergent property of the whole. Just as self-organising systems are entirely deterministic and yet exhibit complex chaotic behaviour, so our minds are realised from intrinsically simple computational elements.
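
The blackboard idea can itself be sketched in a few lines (an invented toy of my own; real blackboard architectures are far richer). Independent agents watch a shared store of symbols and post to it whenever their trigger conditions appear, with no central controller sequencing them:

blackboard = {"percept: smoke", "percept: sharp smell"}

def diagnosis_agent(bb):
    # Posts a hypothesis when the relevant percept is present.
    return {"hypothesis: something is burning"} if "percept: smoke" in bb else set()

def action_agent(bb):
    # Reacts to hypotheses posted by other agents.
    return {"intention: leave the room"} if "hypothesis: something is burning" in bb else set()

agents = [diagnosis_agent, action_agent]
changed = True
while changed:                                   # cycle until the blackboard settles
    new = set().union(*(agent(blackboard) for agent in agents))
    changed = not new <= blackboard
    blackboard |= new

print(sorted(blackboard))

Here the ‘blackboard’ is a single Python set; in the picture above its role is played by activity distributed across the whole cortex.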

Conclusion

I believe there is a computational model of mind which works and which addresses the objections presented here. It is not a simple model, nor is it even near to realisation. It is a synthesis of many disciplines and architectures: von Neumann in parts, highly parallel in others. However, I see no problem in principle with its eventual construction within some appropriate computing medium. By its very nature however, when one day this feat is achieved, there will be no reason why we should have any more understanding of its detailed operation than we do of our own minds.

References

1. Margaret Boden, 1990, The Philosophy of Artificial Intelligence
2. Daniel Dennett, 1991, Consciousness Explained
3. Daniel Dennett, 1995, Darwin’s Dangerous Idea
4. Roger Penrose, 1990, The Emperor’s New Mind
5. Roger Penrose, 1995, Shadows of the Mind
6. Hubert Dreyfus, 1979, What Computers Can’t Do
7. Michael Lockwood, 1989, Mind, Brain and the Quantum
8. John Searle, 1984, Minds, Brains and Science
9. Marvin Minsky, 1985, The Society of Mind
10. Tim Crane, 1995, The Mechanical Mind