Sometimes one must risk error
in order to find truth.
- William James
I think of philosophy as the attempt to grapple with a family of deep and significant questions. When I call these questions deep and significant, I mean that their putative answers promise either to justify or to change our picture of the universe and our picture of ourselves as human beings, as thinkers, as moral agents. To this extent I think that most people have probably dealt with philosophical questions at one time or another, and many of the sciences of human behavior address philosophical issues as well. Philosophy as a discipline also approaches such questions, and what makes academic philosophy so powerful, in comparison with the armchair musings of the average person, is that it has a history of well-known questions, well-argued positions, and established methods to draw upon. What makes the philosopher valuable is that he has mastered this history and methodology and is able to bring it to bear on current issues and debates. The terrain of philosophical thought is strewn with pitfalls and obstacles, and the professional philosopher has gone over much of this terrain many times.
But this very advantage can also be philosophy's greatest disadvantage. There is a tendency, especially among professional philosophers, to identify philosophy with this history and these methods for approaching those questions, arguably to the point where the deep and significant questions themselves are upstaged. I say these things because this essay will address questions that I take to be central philosophical questions, but will do so in a way that sometimes departs from established philosophical methodology. For example, nothing prototypically philosophical will appear until Chapter Seven. The first five substantive chapters will deal instead with issues in control theory, sensory-motor integration, psychology and linguistics. This divergence from philosophical custom is not complete, however. Chapter Seven and parts of Chapter Five are plausibly locatable within current philosophical concerns and practices.
I depart from established philosophical practice (and from standard cognitive science practice, and from psychological practice, etc.) because my purpose is to examine anew the nature of human thought and cognition. I hope to sketch a view of cognition that is applicable to many levels of the cognitive enterprise, from the vestibulo-ocular reflex to choosing a retirement fund, a view which makes sense of certain metaphysical doctrines about mind and reality as well. It will be my contention that if these different levels of cognition and mind are examined in the right way, with the right sort of theoretical machinery, a coherent picture can be seen where before there were only fragments separated by noise. Thus my project is inherently and essentially cross-disciplinary. I do not think that the picture I am peddling could be made to look nearly as attractive if confined to just one area of research, such as developmental psychology, in much the same way that one cannot make a Necker Cube flip (or even be a cube) by staring exclusively at one vertex. To continue the analogy, I hope to take the reader through a number of vertices of cognition, in order to get the terrain to 'flip' into a new, and hopefully more enlightening, configuration.
The danger in any such pursuit is that as scope increases, resolution decreases, and a project as bold (foolhardy?) as this could easily degenerate into vagueness and hand waving. I will attempt to avoid this outcome in two ways. First, at several points I make use of the theoretical machinery I develop to address specific narrow issues. For example, Chapter Five will address issues in the development of the theory of mind, and Chapter Six uses that apparatus in the context of language use to provide insight into wh-extraction and heavy-NP shift. These diversions serve both to sharpen the theoretical apparatus and to demonstrate that it can cope with concrete issues in a fresh and illuminating way.
The second, and more important, way in which I will avoid the charge of armchair hand waving is by assimilating substantial results from researchers in the various disciplines considered. Each of these disciplines boasts a number of important members who view their work as a reaction against the orthodoxy, and whose work is compatible with the framework I develop. This includes, for example, the treatments of linguistic phenomena found in Langacker and Fauconnier, the developmental psychological work of Karmiloff-Smith and Perner, theoretical neurophysiology of motor control in Kawato and Ito, theories of neurophysiology of perception and imagination as in Llinas and Mel, etc.
To the degree that I am able to assimilate these various results, the original contribution of this work consists in its construction of a unifying meta-theory that promises to provide some rhyme to others' reason.
1.2 Brief History of This Project
In the winter quarter of 1992 I was involved in the neuropsychology lab of Vilayanur Ramachandran. At the time the lab was focusing on aspects of neural plasticity and on the phenomenon of perceptual 'filling-in'. I found the relationship between filling-in and imagination intriguing, but I was not satisfied that any of the explanations or theories with which I was familiar were adequate. At the same time, I was interested in reasoning, and especially long-term planning. As a student of Paul Churchland, one is obligated to take connectionism quite seriously, and yet one area where classical models of cognition seemed to have an intuitive edge was reasoning and planning in the absence of action.
At the same time I was taking Robert Hecht-Nielsen's year-long course in neurocomputing. In the second quarter of this course, we were covering recurrent networks, and the topic turned to using such networks for control purposes. The subject of one lecture was applications to model-based control. This is a technique whereby a model of the controlled system is used in various ways to help the controller regulate the plant. Hecht-Nielsen's hour-and-a-half lecture, though focused on engineering applications, seemed to me to provide the key to understanding many of the phenomena connected with brain function that I had been wrestling with.
I decided that this would be my dissertation project: to try to show how model-based control could be used by the brain, at a number of different levels, to solve a number of problems. Over the next few weeks, I worked out rough, provisional ideas about how this strategy might be applied to various levels of brain function, specifically in motor control, mental imagery, planning, and language comprehension. The problem with the project, however, was its scope. In order to demonstrate that a certain control strategy is in use at a number of levels of brain organization, one must learn about these levels of brain function and organization in fair detail. I was confident, however, that I could at least make a plausibility argument in each of the domains under consideration, even if the prospect of detailed and convincing analyses was unrealistic. The promise of a degree of conceptual unification would offset the lack of extended, detailed argument within any given area.
I began my research in motor control, and found quite quickly that a number of researchers within that area were in fact using one or another variant of this control strategy to explain certain phenomena. They were, in effect, doing my work for me, and in a more complete and rigorous manner than I would have been able to myself. I had the luxury, then, of assimilating their work into the overall framework I was trying to defend. The good news, however, did not stop with motor control. In fact, in each area I examined, there turned out to be a number of researchers whose work was compatible with the framework I was developing. Johnson-Laird's 'mental models' account of reasoning and language comprehension, Langacker's Cognitive Grammar, Mel's simulations of visual imagery, and Fauconnier's 'mental spaces' theory of opacity are several examples.
These discoveries changed the nature of the project slightly. Now freed from the obligation to do a tremendous amount of independent research within each of these fields, I could construct instead a more detailed overall model, and try to show how the work of these researchers, in these different fields, fit together within that model. And that, essentially, is this project. Its scope is considerably larger than is typical for dissertations, but that is necessitated by the thesis, which is that a certain sort of control strategy is applicable to understanding brain function at a variety of levels of organization and complexity. And though it is false to say that there is no original contribution in detail to any of the specific domains (there is, for example, the treatment of heavy NP-shift and c-command in Chapter Six, and the solutions to Putnam's and Burge's content attribution problems in Chapter Seven), the bulk of the original contribution lies in providing a framework and common vocabulary within which the work of researchers in different disciplines can be seen to cohere.
1.3 Survey of the Current Project
Having made these brief introductory remarks, I think that the most useful way to flesh out my approach to the questions raised is to simply provide a brief summary of the entire project.
Chapter Two: Emulation and Control
I begin by introducing the notion of emulation, which will serve as the foundation for the entire project. It is a notion I have more or less commandeered from control theory, and is similar to that discipline's notions of a system identification or forward model. At a first pass, an emulator is an entity that mimics the input/output operation of some distinct target system. For example, a flight simulator is a sort of emulator, as it closely matches the input/output operation of an aircraft, where inputs are command signals from joystick and throttle, etc., and the outputs are instrument readings and visual scene (cockpit window in the case of real airplanes, generated graphics in the case of the simulator).
In this chapter I develop a thought experiment that has a human operator in charge of controlling a large robot arm for the purpose of performing grasping operations. Using this example, I will make the important distinction between inverse and forward mappings, and try to show the many interesting uses that forward models (emulators) can have in such control problems. For example, if one operates the real target system and the emulator in parallel, one can use the emulator's outputs as a check on the real system's sensors and instrument outputs. Furthermore, one can run the emulator by itself (without the target system) to perform 'what if' experiments before executing them with the target system.
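The two uses of a forward model just described can be sketched in a few lines of code. Everything here is an invented toy: the one-dimensional 'arm', its gain, and the command sequences are illustrative assumptions, not anything from the thought experiment's details.

```python
# Toy sketch of an emulator (forward model) and two of its uses:
# (1) running in parallel with the plant as a check on its sensors,
# (2) running offline for 'what if' experiments.
# The plant dynamics and all numbers are invented for illustration.

class Plant:
    """The real target system: a 1-D 'arm' whose position responds to commands."""
    def __init__(self):
        self.position = 0.0
    def step(self, command):
        self.position += 0.5 * command   # the true (to the controller, unknown) gain
        return self.position             # sensor reading

class Emulator:
    """A forward model mimicking the plant's input/output mapping."""
    def __init__(self, gain=0.5):       # gain assumed to have been learned already
        self.position = 0.0
        self.gain = gain
    def step(self, command):
        self.position += self.gain * command
        return self.position             # 'mock' sensor reading

plant, emulator = Plant(), Emulator()

# Use 1: parallel operation -- a large discrepancy between the real and
# predicted readings would indicate a sensor or instrument fault.
for cmd in [1.0, -0.3, 0.6]:
    real, predicted = plant.step(cmd), emulator.step(cmd)
    assert abs(real - predicted) < 1e-9

# Use 2: offline operation -- try out an imagined command sequence on a
# copy of the emulator, without ever moving the real arm.
trial = Emulator()
trial.position = emulator.position
outcome = [trial.step(c) for c in [2.0, 2.0]]   # hypothetical commands
```

The point of the sketch is only that the same object serves both roles: coupled to the plant it is a consistency check; decoupled, it is an imagination engine.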
Chapter Three: Perception, Imagery, and the Sensorimotor Loop
This chapter will review some of the more substantial evidence that the brain really does employ emulators for a variety of purposes -- specifically, in motor control, mental imagery and perception. The focus is on these 'lower' cognitive functions for two reasons. First, most (but not all) of the hard evidence for the explicit use of emulators comes from these areas, as they are much better understood neurophysiologically than 'higher' cognitive functions. Second, I want to make the case that phylogenetically higher functions can be seen as adaptations of these 'lower' functions, and thus that the link between sensorimotor integration and cognition is tighter than might have been supposed.
The first example will be from motor control. Specifically, making use of the work of Ito and Kawato, I will examine certain circuits in the cerebellum whose purpose seems to be the emulation of aspects of musculoskeletal dynamics. The control of very fast voluntary movement faces the difficulty that proprioceptive information from the controlled periphery is transmitted relatively slowly (limited by axon conduction velocities) back to the motor centers, and if motor signals are generated on the basis of old information, oscillations and instabilities can develop. However, an emulator, using efferent copy signals, can provide immediate 'mock' proprioceptive information, information which, if the emulator is a good one, will be the same as the real proprioceptive information that the periphery will eventually generate.
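The logic of this delay problem, and of its emulator-based solution, can be made concrete with a toy simulation. The plant, the proportional controller, the gain, and the five-step feedback delay are all invented for the example; nothing here is meant to be physiologically realistic, and the emulator is assumed to be perfect.

```python
# Toy illustration of delay compensation by a forward model driven by
# efferent copies, in the spirit of the proposal described above.
# All dynamics, gains, and delays are invented for the example.
from collections import deque

def run(steps, delay, use_emulator):
    position = 0.0                       # true state of the 'periphery'
    emulated = 0.0                       # emulator state, updated from efferent copies
    target = 1.0
    sensor_queue = deque([0.0] * delay)  # proprioception arrives `delay` steps late
    trace = []
    for _ in range(steps):
        delayed_reading = sensor_queue.popleft()
        # Control on immediate 'mock' proprioception, or on stale real feedback:
        feedback = emulated if use_emulator else delayed_reading
        command = 0.8 * (target - feedback)   # simple proportional controller
        position += command                   # true dynamics
        emulated += command                   # emulator assumed perfect here
        sensor_queue.append(position)
        trace.append(position)
    return trace

stale = run(40, delay=5, use_emulator=False)  # old information: overshoots, oscillates
mock = run(40, delay=5, use_emulator=True)    # mock proprioception: settles smoothly
```

Run with delayed feedback, the controller keeps issuing commands based on old information and wildly overshoots the target; run with the emulator's immediate predictions standing in for proprioception, the same controller converges smoothly.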
I will then turn to some work in mental imagery which suggests that such imagery is, in effect, simulated perception. I will show that the execution of this simulation requires the use of emulators. I will discuss a computer model of mental imagery done by Mel, which explicitly employs emulators.
Chapter Four: Emulation, Representation and Learning
Emulators, as I develop the notion, are neither connectionist nor 'classical' architecturally, though, of course, an emulational architecture can be implemented by either connectionist or classical hardware. In this chapter, I focus on issues in development and learning, and attempt to account for some interesting data in terms of the construction and articulation of emulators. An emulator mimics the input/output function of some target domain, perhaps the external world (we can thus have a 'reality emulator' in our heads, a sort of world model). But there are multiple ways to emulate something. Most basically, one can construct a lookup table of past input/output instances. This might be accurate, but it will generalize poorly, if at all, to new cases. What one would like is an analog model, where different aspects of the target domain (articulants, as I shall call them) are 'separated out' and can be treated more or less independently of the other aspects of the model.
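The contrast between rote and articulated emulation can be put in a few lines. The target domain here (a = F/m) and all the numbers are an invented toy; the point is only the difference in generalization.

```python
# Toy contrast: a lookup-table emulator versus an 'articulated' one.
# The target domain (acceleration = force / mass) is invented for illustration.

# Past input/output instances: (force, mass) -> observed acceleration
training = {(2.0, 1.0): 2.0, (4.0, 2.0): 2.0, (6.0, 2.0): 3.0}

def lookup_emulator(force, mass):
    """Rote table of past instances; accurate on them, silent elsewhere."""
    return training.get((force, mass))       # returns None on unseen cases

def articulated_emulator(force, mass):
    """Force and mass are separated out as independently variable articulants."""
    return force / mass

# The articulated model reproduces all past instances...
for (f, m), a in training.items():
    assert articulated_emulator(f, m) == a

# ...and, unlike the table, extends to a never-before-seen case.
novel = (10.0, 4.0)
assert lookup_emulator(*novel) is None
assert articulated_emulator(*novel) == 2.5
```

Both emulators match the target domain on the training instances; only the one whose articulants can vary independently has anything to say about the novel case.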
This, I think, has interesting connections to the work of Annette Karmiloff-Smith, who offers theories of psychological development which she takes to be neither connectionist nor purely classical. I hope to show how her results fit naturally into the emulational framework (hereafter ETM, for Emulational Theory of Mind). The key phenomenon here will be that of Representational Redescription, a process whereby a representation or capacity gets further articulated in ways which make it more generally applicable, for example to novel situations.
Chapter Five: The Mind/Body Solution
A good model of the behavior of physical objects will make appeal to inertial forces, frictional forces, masses and a host of other ideas -- perhaps in a not very well-articulated manner, like Aristotelian physics, but possibly fairly successful within a certain range. A giant lacuna in such a model will be the behavior of animals and persons, who manifest self-initiated movements, and whose behaviors are straightforwardly predictable via intuitive physics only occasionally. A solution to this problem is to treat certain physical objects as subject to representational/psychological description as well as physical description.
Drawing on the work of Josef Perner I will argue that this ability results from a recursive application of emulation -- that is, certain entities in the internal emulation are themselves represented as capable of internal emulation of some sort; they are represented as representers.
Chapter Six: The Grammar of Thought
If the brain does in fact operate by heavy use of emulators, many of which model the dynamics of the external world via independently addressable articulants, we might be able to account for the semantics of natural language in procedural terms. That is, expressions of natural language can be viewed as instructions, or procedures, for constructing an internal model or emulator.
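The procedural idea can be caricatured in code. The tiny 'grammar' below, its word lists, and the model format are all invented for illustration; the only point is that each expression acts as an instruction that incrementally builds a model, rather than denoting a truth condition directly.

```python
# Toy caricature of procedural semantics: a phrase is treated as a sequence
# of instructions for constructing a small internal model. The vocabulary
# and model structure are invented for illustration.

def build_model(tokens):
    """Each content word issues an instruction that updates the model."""
    model = {"entities": set(), "relations": set()}
    stack = []
    for tok in tokens:
        if tok in {"cat", "mat", "dog"}:     # nouns: introduce an articulant
            model["entities"].add(tok)
            stack.append(tok)
        elif tok in {"on", "under"}:         # prepositions: a pending relation
            stack.append(tok)
        # function words like 'the' issue no instruction in this caricature
    if len(stack) == 3:                      # resolve one [X, relation, Y] pattern
        x, rel, y = stack
        model["relations"].add((rel, x, y))
    return model

m = build_model("the cat on the mat".split())
```

Running the procedure on "the cat on the mat" leaves a model containing two entities and one spatial relation between them: the phrase's 'meaning', on this view, just is the model-construction it effects.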
This chapter will apply ETM to some linguistic phenomena, specifically wh-extraction and heavy NP-shift. The idea here is that if natural language works by constructing emulators, then cognitive constraints on what can and cannot be maintained as an articulated emulator ought to be reflected linguistically, as, for example, in the ungrammaticality of certain sorts of sentences. Conversely, information about what sorts of sentences can and cannot be processed ought to provide clues as to the ability of the brain to maintain certain sorts of emulators.
My work in this chapter will be greatly expedited by the use of Ronald Langacker's Cognitive Linguistics framework, which provides a useful vocabulary for dealing with linguistic phenomena (and which I take to be entirely compatible with ETM), as well as a theory of attentionally mediated segmentation developed by von der Malsburg.
Chapter Seven: Semantics
This chapter begins the serious dive into substantive philosophical issues. I have, in previous chapters, constructed a theory of cognition that places emulators at its core. It was taken for granted that the target system that the higher-level emulators emulate is the real world, or, more accurately, aspects of the real world. The question then arises: are emulators best viewed as neurally implemented models, whose articulants get their meaning because they stand for some entity in the target domain (the world)? Or is the semantics of emulators best viewed as a sort of conceptual-role semantics, the separate entities and articulants of the emulators being invested with meaning as a function of their dynamic interaction with other such entities? The point of this chapter will be that the story must be much more complex than either of these two alternatives -- each is, in a sense, both right and wrong.
The investigation begins by exploring some of the more obvious theories of content: causal theories and conceptual role theories. The point will be to get a feel for some of the more important requirements that a theory of content must satisfy, and for some of the ways in which different theorists have tried to satisfy them. With this background established, I go on to introduce Rob Cummins' Interpretational Semantics, which, though not correct, provides a convenient starting point for constructing a more adequate semantic theory that makes central appeal to emulation and interpretation.