USC's Michael Arbib

October 3, 2005
Technology Research News Editor Eric Smalley carried out an email conversation with Michael Arbib, the Fletcher Jones Professor of Computer Science and a Professor of Biological Sciences, Biomedical Engineering, Electrical Engineering, and Neuroscience and Psychology at the University of Southern California (USC) in September 2005. The exchange touched on computing matter, the action-perception cycle, imagining tea with grandmother, passionate robots, transferring brain settings, the Mirror System Hypothesis, Hurricane Katrina, universal health care, and Goethe.

Arbib was born in England in 1940, grew up in Australia, and earned a Bachelor of Science in Pure Mathematics from Sydney University and a doctorate in Mathematics from MIT in 1963. He spent five years at Stanford University, then became chairman of the Department of Computer and Information Science at the University of Massachusetts at Amherst in 1970. Arbib has been at USC since 1986.

Throughout his career Arbib has encouraged an interdisciplinary environment where computer scientists and engineers can talk to neuroscientists and cognitive scientists.

At the University of Massachusetts he helped found the Center for Systems Neuroscience, the Cognitive Science Program, and the Laboratory for Perceptual Robotics.

At the University of Southern California, he founded the Center for Neural Engineering and the USC Brain Project, an interdisciplinary project in neuroinformatics. He has helped develop the Neural Simulation Language (NSL) and the Action Recognition Database (ARDB), and is extracting lessons from the analysis of vertebrate brain organization for the design of novel computer architectures integrating learning, cooperative computation and perceptual robotics.

Arbib is the author, co-author or editor of more than 30 books, including Brains, Machines, and Mathematics, which points out that there is much to learn about machines from studying brains, and much to learn about brains from studying machines; Computers and the Cybernetic Society, which covers the social implications of computer science; The Handbook of Brain Theory and Neural Networks, which brings together detailed neuronal function studies, system models of brain regions, connectionist models of psychology and linguistics, and mathematical, biological and applied studies of learning; and Neural Organization: Structure, Function, and Dynamics, which provides a comprehensive view of how the brain works.


TRN: What got you interested in science and technology?

Arbib: The earliest relevant memories are from when I was seven, just before or after my family emigrated from England to New Zealand (we moved to Sydney 2 years later).

I had a Meccano set (using nuts and bolts to assemble pieces of metal and gears and pulleys to make machines), and subscribed to the Meccano Magazine.

I remember being sufficiently intrigued by an article on the V-2 rocket to give a talk in class on the topic. And I liked arithmetic enough to calculate the number of seconds in a century -- no great feat, but the sort of interest that got teachers to give me books to read that deepened my interest in science. (A Mr. Smith in New Zealand gave me a book on astronomy, and when I was 11 my mathematics teacher Fred Pollock lent me Mathematics and the Imagination).

During high school and university I thought of myself more as a mathematician than as a scientist, but when I was 19 (plus or minus) a friend introduced me to Norbert Wiener's Cybernetics: or Control and Communication in the Animal and the Machine, and my course was set. I went to MIT to do my PhD and wrote my first book, Brains, Machines, and Mathematics, one summer vacation, which I spent lecturing for the winter term at the University of New South Wales in Sydney.

TRN: What are the important or significant trends you see in science and technology research overall?

Arbib: I really wouldn't claim to see trends overall. My own interest is in the transfer of insights between the study of the brain and the design of machines, but I'm mindful of how insights into the genetic control of development open up a new chapter in computation: we have gone from manipulating bits to symbols to graphics and on to robot control, and we can expect computation in the future to be understood to include the dynamic shaping of matter.

TRN: You study the neural nuts and bolts of perception and you also study how we construct our sense of reality. How closely related are these two things?

Arbib: I don't see the difference -- though I stress that my understanding of perception is "action-oriented". In other words, perceptual schemas ("schemas" are the functional units of the brain's activity; one of my challenges is to understand how they are played out across the structures of the brain) and motor schemas are inextricably intertwined in what Ulrich Neisser called the action-perception cycle: what we perceive guides our actions; how we act provides new sensory data which our perception takes to confirm or disconfirm our expectations. Reality is the pattern of consistency that emerges between perception and action.
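A toy sketch of such a cycle, with every name hypothetical and no claim to represent Arbib's actual models, might look like this: a schema pairs a perceptual test (do the current sensory data fit its expectations?) with a motor routine whose execution yields new sensory data for the next pass of perception.

```python
# Toy sketch of an action-perception cycle with schemas. All names here
# ("Schema", "matches", "action") are illustrative stand-ins, not an
# implementation of Arbib's published models.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Schema:
    name: str
    matches: Callable[[dict], bool]   # perceptual side: do the data fit?
    action: Callable[[dict], dict]    # motor side: act, yielding new data

def action_perception_cycle(world: dict, schemas: list[Schema], steps: int = 5) -> dict:
    for _ in range(steps):
        # Perception: find a schema whose expectations the input confirms.
        active = [s for s in schemas if s.matches(world)]
        if not active:
            break  # nothing matches -- attention is drawn to the novel input
        # Action: acting produces new sensory data, which the next pass
        # of perception confirms or disconfirms.
        world = active[0].action(world)
    return world

# A hypothetical "reach-for-cup" schema: perceiving the cup guides the
# reach; the reach changes what is perceived.
reach = Schema("reach-for-cup",
               matches=lambda w: w.get("cup_visible", False),
               action=lambda w: {**w, "cup_held": True, "cup_visible": False})
print(action_perception_cycle({"cup_visible": True}, [reach]))
```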

Of course, as humans, this basic experience is enriched by our ability to use language to compare experience, and theories about experience, and this reshapes our construction of reality, too. (Mary Hesse and I gave the Gifford Lectures in Natural Theology in 1983, published as The Construction of Reality by Cambridge University Press in 1986. We developed an epistemology which reconciled "schemas in the head" and "schemas as social constructs" -- while agreeing to disagree on the nature of free will and the existence of God.)

TRN: Functional MRI has captured the public's attention with claims for the ability to detect lying and other behaviors. It seems like a useful research and diagnostic tool, but it also seems to be a bit of a blunt instrument when you consider the incredible complexity of the human brain at the neural and synaptic level.

How do you bridge what seems like a tremendous gap between neurobiology and behavior and consciousness?

Arbib: My group invented the Synthetic PET imaging method just over 10 years ago (since extended to Synthetic fMRI) to "fill in the gaps" by relating human brain imaging to the detailed information we have on neural circuitry in animal brains.

We used neural models, implemented on the computer and based on primate neurophysiology, to predict and analyze results from human brain imaging using PET. The key hypothesis was that PET is correlated with the integrated synaptic activity in a region, and thus reflects in part neural activity in regions afferent to the region studied, not just intrinsic neural activity in the region itself.
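As a back-of-the-envelope illustration of that hypothesis (a hypothetical sketch, not the published Synthetic PET method), one can simulate a regional signal that integrates synaptic activity arriving in a region, so that firing in afferent regions shows up in the signal of the region they project to:

```python
import numpy as np

# Illustrative sketch of the key hypothesis: the imaged signal for a
# region integrates synaptic activity within that region, and thus
# reflects firing in the regions projecting to it, not just its own.
# W[i, j] is the connection weight from region i to region j;
# rates[t, i] is the firing rate of region i at time t.

def synthetic_pet(W: np.ndarray, rates: np.ndarray) -> np.ndarray:
    # Synaptic activity in region j at time t: sum over afferents i of
    # |W[i, j]| * rates[t, i] (inhibitory synapses consume energy too,
    # hence the absolute value).
    synaptic = rates @ np.abs(W)
    # Integrate over the scan interval: one number per region.
    return synaptic.sum(axis=0)

# Two regions, with region 0 projecting to region 1: region 1's
# simulated signal is driven by its afferent input even though its own
# neurons stay nearly silent.
W = np.array([[0.0, 1.0],
              [0.0, 0.0]])
rates = np.array([[1.0, 0.0],
                  [1.0, 0.1]])
print(synthetic_pet(W, rates))   # region 1 registers its afferent input
```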

Increasing precision in brain imaging in recent years now makes it possible to gather useful MRI and PET data in the macaque monkey. The virtue of such advances is that they make possible a bridge from macaque neurophysiology via macaque imaging to human imaging, where synthetic brain imaging formerly had to use homology to bridge the two ends directly, and thus with less supporting data.

Anyway, this leads to the following modeling strategy: Develop models of monkey circuitry rooted in detailed neurophysiology and neuroanatomy; determine which regions of the human brain are homologous to which regions of the monkey brain; and then extend these homologies to generate models of human circuitry which either strongly resemble the monkey model (high degree of homology) or can be viewed as an informed variant of such a model (low degree of homology).

In each case, the resultant models of interacting brain regions in the human are to be tested by brain imaging and/or clinical data.

TRN: What emerging tools are giving us a better understanding of the brain and mind, and what tools would you like to use that have not been developed yet?

Arbib: As noted above, I see the challenge as taking advantage of advances in data gathering as they occur -- whether increased precision in brain imaging, more detailed analysis of behavior in humans, or multi-electrode and optical-dye recordings in animals -- by creating new databases and modeling tools (neuroinformatics) to make sense of this new flood of data.

I'm also very interested in the evolution of the human brain (especially what makes it possible for us to use language) and realize that there is a huge body of work on the chemistry, molecular biology and genetics of the brain about which I must learn much more.

TRN: Thomas Dawson of Sony was recently granted two patents (both titled "Method and system for generating sensory data onto the human neural cortex") that define a way to induce sensory experiences by beaming ultrasound pulses into the neural cortex in specific patterns. The idea seems to be to directly and precisely control neural behavior to produce a virtual reality.

I can imagine the ability to induce the experience of sounds, colors and flashes of light, but are coherent, life-like images even remotely possible? Setting aside the issue of how to produce specific effects, is the mechanism even plausible?

Arbib: I haven't studied the patents, so I apologize to Mr. Dawson if his patents answer my doubts. However, I think that the detailed wiring of the human brain that supports our perceptions is totally idiosyncratic from person to person.

Thus, near the periphery -- as you say -- one can expect a certain stimulation to yield a basic sensory or motor experience, but I don't think you can come up with a one-size-fits-all stimulus that could, for example, make every subject imagine having tea with grandmother. We do know that the success of cochlear implants rests both on the ability of the implant to provide stimuli across the sound frequency spectrum and on the subject's brain's ability to adapt itself to this somewhat distorted sensory coding to make increasing perceptual sense of the input.

So individuals might be trained to associate certain patterns of input a la Dawson with particular thoughts, images or actions, and this might generalize somewhat -- but I cannot see going further with near-term technology.

TRN: Context -- the body, the physical environment, society -- seems to play a critical role in shaping consciousness and intelligence. What does this mean for building artificial intelligences? Will we be able to relate to truly intelligent machines?

Arbib: Jean-Marc Fellous and I have recently edited a book, Who Needs Emotions? The Brain Meets the Robot (Oxford University Press, 2005). Ever since Darwin published The Expression of the Emotions in Man and Animals, it has been agreed that, whatever their uniquely human aspects may be, emotions in some sense may be attributed to a wide range of animals and studied within the unifying framework of evolutionary theory.

One aim of the book was to probe the inner workings of the brain that accompany the range of human and animal emotions, and place these brain mechanisms in an evolutionary perspective.

Another aim of the book was to bring Artificial Intelligence (AI) together with the study of emotion. We saw the key division to be between creating robots or computers that "really" have emotions and creating those that exhibit the appearance of emotion through, e.g., having a "face" that can mimic human emotional expressions or a "voice" that can be given human-like intonations.

To see the distinction, consider receiving a delightful present and smiling spontaneously with pleasure as against receiving an unsatisfactory present and forcing a smile so as not to disappoint the giver.

My own chapter was called "Beware the Passionate Robot", noting that almost all of the book stresses the positive contribution of emotions whereas personal experience shows that emotions "can get the better of one". I tried to address the issue of whether and how to characterize emotions in such a way that one might say that a robot has emotions even though they are not empathically linked to human emotions.

I do think that there will be future robots that indeed have emotions -- as high-level indicators of process state that set an overall bias on decision making and condition patterns of communication with others. However, I also think that emotions that are useful (but sometimes harmful) for robots interacting with other robots (imagine a team of autonomous robots responsible for spaceship maintenance on a decades-long mission, or a team of agents monitoring the whole Earth for ecosystem evaluation) need not be similar to the mammalian emotions that are so much a part of human life.

TRN: One of the big challenges in robotics is simply giving machines the ability to accurately perceive their surroundings. What will it take to build machines that can operate effectively in unfamiliar, dynamic environments?

Arbib: One part of the answer, clearly, is that learning will be necessary.

More importantly, perhaps, there is the need for a set of high-level descriptors that can rapidly set useful initial conditions for this learning by quickly categorizing which aspects of the surroundings can be explained by schemas acquired elsewhere, thus focusing attention on what is truly novel.

Of course, robots have a big advantage over us -- one may be able to transfer "brain settings" from robot to robot in a way which is impossible for humans.
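A hypothetical sketch of that asymmetry: a robot's learned schema parameters can simply be serialized and copied to another robot, handing the second robot the first one's initial conditions for learning. (All names and formats below are illustrative, not any actual robotics interface.)

```python
import json
from dataclasses import dataclass, asdict

# Illustrative only: a robot's acquired "schemas" reduced to parameters
# that can be copied verbatim to another robot -- the transfer of
# "brain settings" that is impossible for humans.

@dataclass
class SchemaSettings:
    name: str
    weights: list[float]      # learned parameters for this schema

def export_settings(schemas: list[SchemaSettings], path: str) -> None:
    with open(path, "w") as f:
        json.dump([asdict(s) for s in schemas], f)

def import_settings(path: str) -> list[SchemaSettings]:
    with open(path) as f:
        return [SchemaSettings(**d) for d in json.load(f)]

# Robot A's experience becomes Robot B's starting point, so B's learning
# begins from A's categorization of familiar surroundings.
export_settings([SchemaSettings("grasp", [0.2, 0.7])], "robot_a.json")
robot_b = import_settings("robot_a.json")
```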

TRN: Tell me about the Mirror System Hypothesis -- the notion that we owe our language-ready brains to our hands.

Arbib: By the mid-1970s I had developed a theory of perceptual and motor schemas as providing the essential units for "coordinated control programs" that describe the functioning of the brain, and I had developed models of how such schemas might be played out in the neural networks of the frog's brain, explaining "what the frog's eye tells the frog".

In 1979, then, I published a paper with David Caplan, an expert on brain mechanisms of language, showing how schema theory could provide the framework for a computational linguistics founded on the notion of "distributed localization", in which functions defined by the neurologist were more likely to involve the competition and cooperation of multiple brain regions than to be localized in a single brain region.

My work on brain mechanisms for the role of vision in motor control in frog, rat, monkey and human has a long history, but a new round of computational modeling started in the early 1990s during my involvement in a succession of collaborations with Marc Jeannerod (Lyon), Giacomo Rizzolatti (Parma) and Hideo Sakata (Tokyo).

My group modeled control of hand movements and their coordination with arm movements both at the schema level to address human behavioral data from Jeannerod's lab and at the level of biologically-constrained neural networks to explain neurophysiological data on macaque monkeys from the Rizzolatti and Sakata labs.

Our colleagues in Parma dramatically changed neuroscience's understanding of the visual control of hand movements when, in the mid-1990s, they discovered a class of neurons in F5 (a "premotor area" in the macaque's frontal cortex) that discharged not only when the monkey grasped or manipulated objects but also when the monkey observed the experimenter making a similar gesture. Neurons with this property are called mirror neurons. They require a specific action -- whether observed or self-executed -- to be triggered.

Our immediate response at USC was to conduct a study (with Scott Grafton taking the lead on human brain imaging at USC) which showed mirror properties in three regions of the human brain, with the frontal area being in or near Broca's area, traditionally thought of as involved in speech production.

Our shared work on analyzing the macaque mirror system, and reflection on our study showing a human "mirror system" in or near Broca's area, led Rizzolatti and me to publish in 1998 in Trends in Neurosciences what has become a classic paper. Titled "Language Within Our Grasp", it introduced the Mirror System Hypothesis (MSH) on the evolution of the mechanisms of the human brain which support language.

In later papers, I developed a general framework, refining the original formulation of MSH, that encompasses seven different stages:

· S1: Grasping
· S2: A mirror system for grasping shared with the common ancestor of human and monkey
· S3: A simple imitation system for grasping shared with the common ancestor of human and chimpanzee
· S4: A complex imitation system for grasping
· S5: Protosign, a manual-based communication system, breaking through the fixed repertoire of primate vocalizations to yield an open repertoire
· S6: Protospeech, resulting from control mechanisms evolved for protosign coming to control the vocal apparatus with increasing flexibility
· S7: Language: the change from action-object frames to verb-argument structures to syntax and semantics

Stages S4-S6 are hypothesized to distinguish the hominid line from that of the great apes, while the final stage is claimed to involve little if any biological evolution, but instead to result from cultural evolution (historical change) in Homo sapiens.

A recent paper [Arbib, M.A., 2005, From Monkey-like Action Recognition to Human Language: An Evolutionary Framework for Neurolinguistics, Behavioral and Brain Sciences, 28:105-167, including Commentaries and the Author's Response] develops the argument further and responds in detail to the numerous Commentaries, laying a firm foundation for further research on MSH, pro or con.

Issues include responses to arguments for a "speech only" view of language evolution, and arguments showing how the study of language processing may be embedded within a more general analysis of brain mechanisms for goal-directed action.

TRN: What are the important social questions related to today's cutting-edge technologies?

Arbib: Perhaps the key point is to turn the question round and get our heads straight about what the important social questions are, then ask where the solutions lie -- whether in education, technology, politics or elsewhere.

If we consider the fate of New Orleans with Hurricane Katrina, we can certainly see challenges for technology in terms of better design and maintenance of levees, or in communication systems, but we also see the fruits of pork-barrel politics, lack of planning and coordination (technology can help, but one needs bright dedicated people to make use of it), and acceptance of a status quo in which too many people live in poverty.

TRN: In terms of technology and anything affected by technology, what will be different about our world in five years? In 10? In 50? What will have surprised us in 10 years, in 50?

Arbib: The most surprising thing about technology is how readily we accept it. It's less than 10 years since www.thisandthat.com first appeared in advertisements (I made that up as a generic name but just checked -- the URL is in use for an actual website) and now we (or, at least, a very large number of citizens of the developed world) cannot imagine not having this link so we can move from the ad to real information about the product.

A few years ago, someone walking down the street talking to themselves struck us as crazy -- now we assume they are talking on a cell phone.

I've talked about the mix of technology and politics. I think one major technological breakthrough will combine molecular biology, medicine, nanotechnology and computer-aided diagnosis -- learning how to read the genome of a person to provide drugs tailored not only to the disease but also to the chemistry of the individual's body, then knowing how to target the drug to specific organs, and then monitoring progress and adjusting the dosage accordingly.

The political breakthrough will be to achieve universal health care -- but with priorities (e.g., no heart transplants, with rare and carefully stated exceptions, unless you can fund them from other sources) which keep costs in check, and incentives to get people to look after their health.

TRN: What's the most important piece of advice you can give to a college student who shows interest in science and technology?

Arbib: I give graduate students looking for a topic for a Ph.D. thesis two pieces of advice:

The first is what my Ph.D. supervisor told me when he was concerned that the breadth of my interests distracted me from a focused attack on my thesis. It was a quote from Goethe, roughly translated as "He who would master the infinite should take the finite and master it from all sides." In other words, a well-chosen problem is one that is well-focused yet whose solution opens up the understanding of many things.

The other advice is more specific. Neither seek to do a thesis for which your advisor has laid out all the details, nor seek to work too independently. To my taste, the best topics are those defined by the interplay of the student's interests and enthusiasms with the advisor's expertise. It is very satisfying when both advisor and student can learn from each other.

TRN: What books that have some connection to science or technology have impressed you in some way, and why?

Arbib: When I was 11 and each year thereafter in high school, I read and reread Mathematics and the Imagination by Edward Kasner and James Newman. It introduced me to the mathematical treatment of infinity and to the idea of topology and much more -- including the word "googol" when it still meant 1 followed by 100 zeros. Even though the book is now somewhat dated, I think any high school student interested in mathematics would be stimulated by this book (still available from Dover Publications).

TRN: What other readings do you recommend that would bring about more interest and/or a better understanding of science and technology?

Arbib: I've read so many books -- and not read so many more!! -- that I really don't know where to start on this question. It's certainly worth getting hold of Einstein's own non-technical account of relativity theory. John Allman wrote a fine book on the evolution of the brain [Evolving Brains], with lots of pictures. My own little book from 1964, Brains, Machines, and Mathematics attracted many enthusiastic readers -- it's a better book, being more spontaneous, than the 2nd edition of 1987. And for gripping scientific biography, it's hard to beat James Watson's highly personal account of how he and Francis Crick discovered the double helix structure of DNA [The Double Helix] -- I remember staying up all night to read that one.

TRN: Is there a particular image (or images) related to science or technology that you find particularly compelling or instructive? Why do you like it; why do you find it compelling or instructive?

Arbib: [A] painting by René Magritte [Castle in the Pyrenees]. I used it for the cover of my book Computers and the Cybernetic Society. I use Magritte to illustrate ideas about perception. I note that often we see patterns that "are not there" because we interpret our sensations in terms of schemas stored in our brain, and that often we resolve parts of the scene we have not seen clearly by seeking patterns of schemas that match our expectations about the world.

Magritte is a surrealist who forces us to see the discomfiting and unexpected by painting parts of the scene so realistically that we cannot doubt what that part of the picture must represent -- and then we find that globally the picture contradicts all our expectations. In a sense, this balance between making good use of what we already know yet being open to new experience is the essence of science -- and perhaps of a full human life more generally.

TRN: What are your interests outside of work, and how do they inform how you understand and think about science and technology?

Arbib: We live on an interesting planet. I enjoy traveling -- and one of the privileges of being a scientist is that not only do I visit many countries but I have the chance to make friends and get the "inside view" of that country, a view that I can calibrate against the feel for the world I get from reading The Economist each week, perhaps the most literate and global news magazine in the English language, even if its views are not always aligned with my own (but isn't that how we learn?).

I enjoy good food from many cuisines, and am fortunate to have a wife who is an exceptionally fine cook (and has many other fine qualities as well!). And I read voraciously, from serious non-fiction to literary novels to science fiction, which is often thought-provoking -- but often just good escapism when I need to get my brain out of high gear for a while.
