Leonardo Digital Reviews









Extending Consciousness and Machines

a review article by Robert Pepperell

How to Build a Mind: Toward Machines with Imagination
by Igor Aleksander
(Uncorrected proof)
Columbia University Press, NY. U.S.A. 2001.
192 pp., illus b/w. $24.95
ISBN: 0-231-12014-1.

and

Consciousness and its Place in Nature: Toward a Science of Consciousness Conference, Skövde, Sweden, August 7-11 2001

Robert Pepperell,
University of Wales College, Newport, Wales.
pepperell@ntlworld.com


(For a comparative review of "How to Build a Mind", see Curtis Karnow's review of the same book.)

It was fortunate that I was offered an uncorrected proof copy of "How to Build a Mind" to review just prior to attending the "Toward a Science of Consciousness" conference where both Igor Aleksander and I were to speak. It gave me the opportunity to discuss many issues arising from the book directly with its author in an atmosphere charged with debates about the very nature of consciousness. In reviewing this book I have tried to give the reader a sense of that atmosphere as well as a flavour of some of the debates themselves.

"Could a machine think?" pondered Ludwig Wittgenstein, just at the point in the mid-1940s when the construction of an "electronic brain" seemed theoretically possible. The broad implications of this question have fuelled one of the most heated debates in contemporary thought - the nature of consciousness and how it might be mechanised. Igor Aleksander is no newcomer to the field or the debate. As this partly autobiographical book makes clear, he has been actively concerned with modelling the mind in machines since the early 1960s and is probably best known today for his work with neural networks at Imperial College, London. Over that time he has been writing prodigiously on the subject of artificial intelligence, with many books and respected papers to his name. "How to Build a Mind" is clearly an attempt to popularise a subject that, to many outsiders, is technically and philosophically complex. But the book is also intended as a serious intervention in the debate from someone who, despite the technical focus of his work, seems to want to frame the 'hard' problem of engineering a mind in the 'squidgier' problems of human experience, philosophy and, in particular, the imagination.

First we discussed my concern that the main title could be regarded as at best over-optimistic or at worst misleading with its echoes of some unfortunately titled books such as Daniel Dennett's "Consciousness Explained". One senses the pressures that commercial publishers exert in the interests of stoking controversy and gaining attention. It seems that Aleksander would be more comfortable with the less emphatic subtitle "Toward Machines with Imagination"; a title that certainly summarises the aim of his current research, although some may argue even the claim implicit in this phrase is premature.

Aleksander makes the point early in the book that he wishes to shift the locus of the discussion away from the concept of consciousness towards the idea of imagination. He is looking for the "force of consciousness in the power of the imagination. I need to understand how my brain, an evolved machine of awesome complexity, can provide me with not only pleasurable reverie but also all the other elements of my mental life." (p. 2). This extract characterises what perhaps is unique about Aleksander's approach. He offers a combination of pragmatic mechanism, derived from his background in engineering, with a profound respect for, and curiosity about, the more ephemeral aspects of human existence such as art, philosophy and mind. He is critically aware of the pitfalls and blind-alleys into which discussions about artificial intelligence can stumble and his own position, evolved over years of research and contemplation, is lucid whilst not overly prescriptive. For example, he agreed with me that cognitive science has led some researchers to confuse the thing being modelled (the brain) with the model itself (the computer). But he is also clear about the advantages of the methodology of modelling the neural functions of the brain in software: "We use computers to get to grips with the complexities of neural structures in much the same way that a weather forecaster uses computers to get to grips with the complexities of the weather. Nobody complains about the latter on the grounds that "computers cannot be the weather"; they only complain if it rains when fair weather is forecast." (p. 172). In the same way that meteorological models might be able to predict hurricanes and save human lives, he argues neural models may have medical applications that help to relieve human suffering. Much of the work he currently does is funded by the Wellcome Trust medical foundation with a view to potential treatment of neurological disorders.

I pressed him on some of the more ethically troubling questions that have been implicated in AI research over the years, particularly the involvement of the military and the extent to which we are willing to hand over responsibility to machines for their own conduct. He offers an entirely practical constraint on the 'out of control' scenario in which researchers might design machines that are no longer accountable to human operators: "I think that would be thoroughly irresponsible and, in fact, it would be against standard industrial engineering legislature. Anything that has the ability to interact with humanity has to be certified. This is the argument I always have with Kevin Warwick. He says, "Things can get out of hand. Maybe you don't want to relinquish responsibility but they [the military] will take it away from you, they'll build these things that will go around killing everybody." Now, the military do have a mandate to build things that kill people, but that has its own legislature. If they wanted to destroy Moscow it would be far more difficult to build a conscious robot to do it than just drop a missile."

One of the most contentious questions addressed during the conference was the location of consciousness - more specifically whether or not it was located in the brain, or the degree to which it might be so. Many of the eminent invited speakers who addressed the question were emphatic that consciousness is specifically a product of the brain and were swift to dismiss alternative views. However, there was a significant minority that resisted this dominant position and the consequent arguments were, for me, amongst the most stimulating of the conference. The question seems to turn on the extent to which one recognises anything other than the brain as necessary to consciousness, in particular the body (with all its sensory feedback) and the environment (with all its active stimuli). In other words, the brain is obviously a necessary condition of consciousness, but is it sufficient? Again Aleksander treads a pragmatic path between the two extremes of this debate. Although he does not address the question directly in "How to Build a Mind" the book does contain a summary of his previous book "Impossible Minds" in which he is fairly explicit about the minimum conditions required for a conscious entity: "One of the pillars of "Impossible Minds" is that anything that is conscious must have some connection with world events or juxtapositions of world events." (p. 154). He is also aware of the flaws in the so-called "brain in a vat" model which downplays or ignores the feedback between the brain and the body. Thus, in his current research aimed at building a realistic model of consciousness he is using programmed neural nets in active robots in order to simulate the ongoing experience of an agent that is conscious of, and responsive to, a dynamic world. To my mind such an approach puts Aleksander firmly on the "extensionist" side of the argument (although I doubt he would use the term).
What's more, his emphasis on imagination also implies some sort of emotional or visceral constituent to consciousness, which in turn implies the co-operation of a functioning body. I suggested to Aleksander that his views were sympathetic to those who saw consciousness as a distributed rather than localised phenomenon: "I wouldn't have started talking about that by saying 'where is consciousness located?' but more like what does consciousness involve, or need, to exist. I see it as something that does emanate from the brain and to us our consciousness feels like a single point event in our head and then everything we experience out there, other people, political systems, whatever, is a way in which this thinking we do, which is just the firing of neurones, reaches out way beyond the confines of our brain. It's this 'out-thereness' which I find totally fascinating. I don't believe in brains in vats."

Perhaps Aleksander's most original contribution to current ideas about machines and consciousness is his foregrounding of imagination, which provides the main thesis of this book. Thus he restates Wittgenstein's question "could a machine think?" as "can a machine imagine?" (p. 3). He goes on: "The answer will not be revealed in the next paragraph or two but, hopefully, will begin to emerge by the end of the book." Whilst much of the book is concerned with sketching the historical, philosophical and technical context from which the AI debate emerges, the last few chapters attempt to attack the problem of consciousness directly with a theory of mind based on "ego-centeredness". For Aleksander this means a neural area in a brain simulation that "coherently represents the world from the point of view of the observer. This receives signals both through visual channels and from the muscular activities of the system, giving it the capacity to reconstruct objects as they exist in the world but as seen from the point of view of the observer. The ego-centered area represents the world as it appears to be as an extension of oneself" (p. 158). Without giving a technical explanation he goes on to claim that such an area is also capable of imaginative manipulation: "Indeed this system is capable of imagining "a blue banana with red spots" even if such an object has never been part of its learning experience. The way in which this happens is that the words stimulate specialist sensor-centered areas that represent blueness, red spottedness, and banananess, while the ego-centered world area does the rest." (p. 159). Hence the ability that some of Aleksander's research machines apparently share with humans of being able to "see things that are not there" or "things they have never seen". I asked him about the relationship between imagination and consciousness: "I see imagination as a major ingredient of consciousness.
It's the most beautiful part of consciousness and it's the thing that I wanted to write the book about." Those outside the closed world of cognitive science should surely welcome this interest in the more aesthetic tendencies of human thought from such a prominent mechanist.

Aleksander's machines (dubbed with names such as WISARD and MAGNUS) offer compelling evidence of the power of computer systems to mimic human behaviour, even that which seems most un-computer-like. What is perhaps less clear in Aleksander's overall case, as it is presented here, is the role of language in this whole system and how words can give rise to pictures. He distances himself from Wittgenstein's early "picture theory of the mind" in which mental images might be seen as merely illustrating words (p. 169). Instead he offers a more comprehensive view of conscious experience which might include all the other sensory qualities pertaining to a thing such as, for example, a 'cup'. But I for one am not convinced that the correlation between a verbal description and a mental image is as straightforward as the "blue banana" example might suggest. Contrary to what several speakers at the conference claimed, I am not able to close my eyes and conjure up an image of a blue banana or a red lemon in anything but the foggiest way. I can certainly conceive of such objects but I do not perceive them as sharp, bright pictures in the sense implied by Aleksander and others. In order to experience a dream-like pictorial lucidity, it seems to me, one must be either asleep or in such a deep state of relaxation as to be almost oblivious to verbal stimuli. The obvious exception is waking hallucination, which is not directly addressed in this book but which might actually be closer to what is happening in the computer simulations described. As I raised this problem Aleksander mentioned the common example given by visual working memory psychologists which demonstrates that one can count the windows of one's house whilst attending to some other task, e.g. listening to a lecture. To do this one does not have to have a perfect depiction of the house since the fact of attending to the windows distorts the 'picture' entirely.
Yet I remain unconvinced that such 'visualisation' (which I'd prefer to call conception) is primarily a 'visual process' in the sense implied at the end of chapter 11 where Christof Koch and Francis Crick's work on the anatomy of the visual system is cited. It may be that such 'visualisations' are as much linguistic constructions as they are apparitions in the visual apparatus. In neurobiological terms it would be interesting to look at any data pertaining to the quality of imagination of individuals who have impaired visual function and to determine whether or not, for example, they could complete a similar counting task based on experience of the sense of touch.

"How to Build a Mind" raised a number of other points that stimulated our discussion such as the limitations of digital systems for modelling reality, Zen theories of mind, feedback loops and internal states of networks, mathematical recursion and drug induced hallucinations. This is some indication of the fact that, although quite short, the book holds a great of ideas and offers a rich set of possible connections to be explored. It is also reflective of the multi-disciplinary approach to the problem of consciousness that the conference itself was seeking to foster. Yet the book has structural weaknesses that, in my view, could have been minimised with more careful editing. In particular the regular insertion of imagined dialogues with philosophical figures is sometimes illuminating, as when discussing Kant (p. 81), whilst at other times strange, as when discussing Thales (p. 17). The last couple of chapters, which contain the bulk of Aleksander's current thesis, seem more compacted and opaque than the rest of the book, perhaps not surprising as he tries to present a complex set of ideas within a few pages. Setting aside these deficiencies I felt "How to Build a Mind" strongly reflected the author's mixture of pragmatism and inquisitiveness. This is not a book of philosophy yet it has, I believe, worthwhile philosophical implications. It is not really even a book about computers since, for Aleksander, the computer simulation is just a means to an end. Rather it is about purposeful inquiry in to the nature of the human mind. As such it indicates the extent to which our thought is currently understood, and more importantly the much greater extent to which it remains unknown.








Updated 5 January 2002.






copyright © 2002 ISAST