The Evolution of Language -- III
I recall reading with fascination how frogs go about catching flies so easily. The claim, made in Lettvin, Maturana, McCulloch, and Pitts's classic paper "What the Frog's Eye Tells the Frog's Brain," was that the neural mechanisms associated with frog eyes preprocess visual data in such a way that "the eye speaks to the brain in a language already highly organized and interpreted, instead of transmitting some more or less accurate copy of the distribution of light on the receptors." The result is a rapid determination of what is food, what might be a danger, and what is of no interest. According to Wikipedia, this paper, despite its age, remains one of the most commonly cited papers in the Science Citation Index.
Given that children learn languages relatively quickly without much teaching, and that adults use language effortlessly and with great speed, it would be surprising if we humans did not have specialized neural structures that facilitate language learning and use. The turns of a conversation commonly overlap or occur with minimal or even no gaps between them, which means that we go from hearing and understanding someone else's utterance to planning and then producing one of our own at warp speed.
We humans, like frogs, have specialized neural structures, including several that are associated with linguistic processing. Most of you will have heard of Broca's Area and Wernicke's Area, both of which are located in the left hemisphere of the brain in most people; though these areas occupy different locations in the left hemisphere, they are connected via the arcuate fasciculus. Wernicke's Area is located toward the back of the left side of the brain and is dedicated to auditory processing. Broca's Area is located forward of Wernicke's Area and is concerned with language understanding and speech production. It is clear from all the research done over the years on language deficits due to brain damage that these separate areas exist, and more recent studies employing noninvasive magnetic resonance imaging have provided further evidence. However, it is not yet possible to tie the specific linguistic functions involved in understanding and producing speech to specific locations in the brain. One reason is that MRI studies require the subject to be still, but one cannot speak without movement. Having subjects whisper is one way to get around this.
It is no accident that the parts of the brain dedicated to language are on the same side of the brain as the parts related to thinking. I have argued that cognitive development is independent of language development, one reason being that children evidence an ability to think before they acquire the ability to use language. Children learn to reach out for things as a way of requesting or demanding them, for instance, before they can make linguistic requests. And they evidence the ability to make offers nonverbally (by holding an object out for someone to take). But the use of language and our ability to think are very intimately connected, and one can argue that thinking about abstract concepts depends critically on the ability to define or characterize them in language. Though there is no way to prove this, I suspect that our brains were preadapted for language before our larynx descended in such a way as to favor the production of linguistic sounds. We had reasons to be able to think and to communicate using sound before full-blown language evolved.
I think that efficiency in language understanding and production has favored the development of languages that are computationally efficient. Chomsky put us on a path of studying grammar from a mathematical perspective, with grammars represented by some kind of formal system. He also took great pains to dissociate his notion of "sentence generation" from the notion of "computation." I would like to suggest that we take precisely the opposite view. Those (like me) who worked within Chomsky's original paradigm also took a modular view of language, wherein each different kind of linguistic phenomenon was relegated to a different module; this, of course, resulted in the need to study the interconnections between "adjacent" modules. So, we had phonology in one module and morphology in another, and also had the field of morphophonology to account for their interactions.
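To make the idea of a grammar as a formal system concrete, here is a minimal sketch, in Python, of a few context-free rewrite rules together with a naive generator that treats "sentence generation" as literal computation. The toy rules and vocabulary are my own illustration, not anything Chomsky proposed:

```python
# A minimal sketch of a grammar as a formal system: a handful of
# context-free rewrite rules and a generator that expands them.
# The rules and words are invented for illustration only.
import random

RULES = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["frog"], ["fly"]],
    "V":   [["catches"], ["sees"]],
}

def generate(symbol="S"):
    """Rewrite a symbol recursively until only words (terminals) remain."""
    if symbol not in RULES:            # terminal: an actual word
        return [symbol]
    expansion = random.choice(RULES[symbol])
    words = []
    for sym in expansion:
        words.extend(generate(sym))
    return words

print(" ".join(generate()))            # e.g., "the frog catches a fly"
```

On this view, generating a sentence just is running a computation over the rule system, which is precisely the identification Chomsky resisted.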
In working with a computer scientist, Terry Patten, and his student Barbara Becker,1 I was introduced to the notion of expert systems, in which compiled knowledge, rather than reasoning from "first principles," is employed in inferencing. This widely accepted approach within the study of artificial intelligence in computer science has proved useful in understanding, among other things, the reasoning of physicians, who, on noting a few symptoms of an illness and facts about a patient, seem very quickly to come to a diagnosis and possible treatment. In each case, the doctor could presumably have reasoned from basic principles of medicine and pharmacology to the same result, but, instead, the reasoning is in some sense "short-circuited," thanks to the existence of his or her compiled medical and pharmacological knowledge.
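The physician example suggests a simple picture of what compiled knowledge buys you computationally. The sketch below is a toy of my own devising (the rules and diagnoses are invented, and real expert systems are far more elaborate): instead of chaining deductions from first principles, the system pattern-matches precompiled condition-to-conclusion rules in a single cheap step.

```python
# A toy illustration of "compiled knowledge" inferencing. Each rule pairs
# a set of observed findings with a ready-made conclusion; matching a rule
# is one set comparison rather than a chain of first-principles deductions.
# All rules and diagnoses here are invented for illustration.

COMPILED_RULES = [
    ({"fever", "cough", "chest pain"}, "pneumonia (consider a chest X-ray)"),
    ({"fever", "rash"},                "measles (check vaccination history)"),
    ({"headache", "stiff neck"},       "possible meningitis (urgent workup)"),
]

def diagnose(findings):
    """Fire the first rule whose conditions are all present."""
    for conditions, conclusion in COMPILED_RULES:
        if conditions <= findings:     # subset test: all conditions observed
            return conclusion
    return "no compiled rule matches; fall back to slower reasoning"

print(diagnose({"fever", "cough", "chest pain", "fatigue"}))
```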
The problem that interested Patten, Becker, and me is that linguistic processing -- from hearing someone say something to replying -- proceeds so rapidly that it is difficult to see how it could happen if one were to assume a modular approach to grammar, conceived as a formal system, which our utterance understanding and production routines must somehow "consult." This would be a quintessential "reasoning from basic principles" approach to the problem. Instead, we opted for the view that a proper study of grammar must be embedded within a theory of language understanding and production using some computational paradigm.
Linguists are language analysts -- language parsers, if you will -- and it has been very natural for us to assume that we hear linguistic sounds, map these into sound sequences constituting morphemes, map these into words, map the words into phrases and syntactic structures of some type, associate conventional meanings with these syntactic structures, and then determine the meaning of the utterance in context (i.e., its significance). This approach tacks pragmatics on at the end of language parsing/understanding as a kind of afterthought (pun intended).
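Here is a schematic sketch of that pipeline view of understanding. Every function below is a stub of my own invention standing in for a whole subfield; the point is only the shape of the architecture, with context consulted last of all.

```python
# The stage-by-stage "parsing pipeline" picture, stubbed out. Note where
# pragmatics sits: bolted on at the very end, after a context-free
# "logical form" has already been computed.

def segment_into_phonemes(signal):   return signal.split()            # stub
def group_into_morphemes(phonemes):  return phonemes                  # stub
def group_into_words(morphemes):     return morphemes                 # stub
def parse_into_structure(words):     return ("S", words)              # stub
def conventional_meaning(syntax):    return {"content": syntax[1]}    # stub

def significance(meaning, context):
    # Pragmatics as an afterthought: context enters only at this last step.
    return (meaning, context)

def understand(acoustic_signal, context):
    sounds    = segment_into_phonemes(acoustic_signal)
    morphemes = group_into_morphemes(sounds)
    words     = group_into_words(morphemes)
    syntax    = parse_into_structure(words)
    meaning   = conventional_meaning(syntax)   # context plays no role yet
    return significance(meaning, context)      # pragmatics tacked on last

print(understand("could you give me a ride", {"setting": "ride request"}))
```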
The problem with this approach is that it fails to appreciate two facts. The first is that language understanding is driven in part by contextual cues (i.e., by socio-pragmatic considerations). The second is that socio-pragmatic considerations also drive utterance production, the other side of the language processing coin. Once a "ride request" is initiated (see the prior blogs in this thread) and recognized as such, addressees come to have expectations as to what others will say before they speak -- not the exact words, of course, but the gist of what they might say. Many misunderstandings in conversation2 reflect the fact that we commonly have such expectations.
That socio-pragmatics drives language production cannot seriously be doubted. Speakers do not start a reply to what someone has said with a "logical form" or whatever else passes for a representation of conventional meanings these days. In a context-rich interaction, where it can be assumed that someone owns a car, is willing in principle to provide one a ride somewhere, and seems also able to do so, we might say any one of a very large number of things to request the ride, ranging from "Could you give me a ride to school?" to "Mind giving me a ride to school?" to "Hey, I need you to take me to school," the specific choice reflecting style (formal, informal, intimate, etc.), politeness (reflecting our social relationship), and register considerations. Each of these utterances has a different conventional meaning, so, obviously, we cannot assume that we start off our utterance with the selection of some "logical form." Instead, we start with what we might call the "gist" of what we mean to communicate.
We are all familiar with locutions like "the gist of what she said was..." and I take that notion of "gist" quite seriously as the (pragmatic) meaning component of conversational interactions. Going much further into this would entail replicating a chapter of my book,3 so let me just say here that the gist reflects what the speaker means to contribute to the work of an interaction. At its most elemental level, the gist of any ride request is that the initiator of the interaction wants a ride somewhere. Patten, Becker, and I argue (a view elaborated in my Cambridge University Press book) that we combine the gist of what we mean to say with choices of appropriate style, politeness, and register features, and those somehow get mapped into an utterance with a particular linguistic form. Our claim is that there exists a pragmatic stratum, a kind of hierarchical network, that interfaces with some logico-grammatical system, itself a kind of hierarchical network with integrated, rather than modularized, linguistic features. The logico-grammatical system we employed was a systemic grammar, something that would have been anathema to me during most of my pre-Patten days.
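As a rough illustration of the direction of processing being argued for, the sketch below starts from a gist plus style and politeness choices and maps them onto a surface form. The little feature table is an invented stand-in, vastly simpler than the systemic grammar we actually employed, but it shows the key point: one gist, many conventional meanings.

```python
# A minimal sketch of gist-driven generation: pragmatic features select a
# surface form, and the gist fills it in. The features and templates are
# invented for illustration; they are not our actual pragmatic stratum.

GIST = {"act": "request", "want": "ride", "destination": "school"}

# (style, politeness) -> surface template for this one gist
SURFACE_FORMS = {
    ("formal",   "high"): "Could you possibly give me a ride to {destination}?",
    ("informal", "mid"):  "Mind giving me a ride to {destination}?",
    ("intimate", "low"):  "Hey, I need you to take me to {destination}.",
}

def realize(gist, style, politeness):
    """Pick a surface form from pragmatic features, then fill in the gist.
    The conventional meaning differs across outputs; the gist does not."""
    template = SURFACE_FORMS[(style, politeness)]
    return template.format(**gist)

for features in SURFACE_FORMS:
    print(realize(GIST, *features))
```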
So, what does this have to do with the evolution of language? My last blog ended with the claim that "highly specialized perceptual and sound production and linguistic-cum-cognitive abilities would have evolved." I somehow doubt that these highly specialized "linguistic-cum-cognitive" abilities are as specialized as the perceptual mechanisms found in a frog's eye, but I would like to suggest that this is the right direction to look for answers.
In my view, this dictates a view of the evolution of language in which our brains were preadapted for language via our development of an ability to correlate and compile contextual information and practical knowledge in a way that allowed us to draw inferences from stimuli and respond very rapidly to them. As our ability to speak and to engage in linguistic hearing emerged, it is hardly surprising that we developed specialized areas of the brain for dealing with both linguistic hearing and language processing. And it is not at all surprising that linguistic hearing and language processing should be located in different areas of the brain, given that hearing per se would have come very much earlier than the evolution of language.
1Terry Patten, Michael L. Geis, and Barbara D. Becker, "Toward a Theory of Compilation for Natural Language Generation," pp. 77-101.
2An extreme case of the use of expectations in language understanding occurred in an exchange I had with my wife before we married. Having visited her house to let her dog out while she was out of town, I noticed that her grass had grown. I also noted that she had seeded a small patch of her lawn. When she returned, she came over to my place and said something like, "The grass has sprouted and I need to take the netting off," but I heard, "The grass has grown and I need to mow it." After realizing that I couldn't follow her next couple of utterances at all, I saw where I had gone wrong and backtracked to her initial utterance. This is what I mean by socio-pragmatic expectations (which include background assumptions and contextual information) driving utterance understanding, or, in this case, misunderstanding.
3Check out my blog The Meaning of Meaning. The concept of "gist" is essentially the same as what I call utterance significance in this blog.
3 Comments:
Sorry, been a bit preoccupied or would have commented earlier.
Things that occurred to me while I was reading:
How much difference is there between an infant or young child reaching for something it wants (or offering something to another person) and a dog straining at its leash or bringing you a ball or something when it wants to play fetch?
The parallel development of areas specialized for language (Broca & Wernicke areas) across individuals would seem to indicate a genetic involvement, but is there any way (at present) that we can distinguish this from simple non-genetically-determined convergent development resulting from functional/processing needs? If the developing limb of a fetus is damaged or defective, the fetus does not develop a new one to replace it. However, if areas of the brain where language functions usually reside are damaged, other areas are coopted to take over language processing.
(Scattered again, I know.)
11:44 AM
Hi! I love your posts & want to say, "Reasons Greetings!" I hope you're enjoying the holidays. :) <---that's a sincere smiley face, not a snide one
1:06 AM
i love your blog. it's useful for my presentation.
thanks
7:44 AM