
Saturday, August 8, 2009

linguistics, sign language and writing

Following my previous post on a "phonetic basis" for language, I should make it clear that I consider signed languages to be close analogues of spoken languages. (As far as I know, this is commonly accepted among linguists.)

However, one thing linguistics hasn't really provided is a unified theory of what happens cognitively in sign language -- part of the problem is that a lot of people still perceive sign languages as proxies for spoken language, or an alternative to speaking; that is, they take signed systems to be like writing systems. This doesn't bode well for scientific attention or funding...

We know sign language systems are very similar to phonetic systems, and they can in fact be analysed like phonetic systems as far as dynamics like sound change, lexical diffusion and acquisition go; signs can be broken down into equivalents of phonemes, and so on. The most pertinent difference is generally that a signed language is often capable of much more co-articulation than a spoken one. (Co-articulation is very common in all spoken languages, but we can usually co-articulate sound elements only if their places of articulation are far enough apart. The /t/ in "tea", for example, is a co-articulated phoneme, composed of a consonant not unlike the Spanish [t] plus aspiration [h].) Sign language is not ideographic either. The fundamental permitted gestures in a sign language do not represent "ideas" -- like phonemes, by themselves they are often meaningless, until combined together. The gestures obey the recombination principle -- if you make one gesture and then another to form a word, that word is not necessarily related in meaning to either gesture, much like "party" is not a semantic fusion of the concepts "par" and "tea".
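
If it helps to see the recombination principle mechanically, here is a toy sketch of my own (the "phoneme" transcriptions are simplified stand-ins, not real IPA): meaningless units acquire meaning only as whole ordered combinations, and reordering the same units yields an unrelated word.

    # Toy sketch of the recombination principle (simplified transcriptions).
    LEXICON = {
        ("t", "ei", "l"): "tale",
        ("l", "ei", "t"): "late",   # same units as "tale", unrelated meaning
        ("t", "ae", "k"): "tack",
        ("k", "ae", "t"): "cat",    # same units as "tack", unrelated meaning
    }

    def lookup(units):
        """Meaning attaches only to the whole ordered combination."""
        return LEXICON.get(tuple(units), "(meaningless by itself)")

    print(lookup(["t", "ei", "l"]))  # tale
    print(lookup(["l", "ei", "t"]))  # late
    print(lookup(["t"]))             # (meaningless by itself)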

Signed languages are so similar to phonetic languages that they are often used as an alternative avenue for researching child language acquisition, especially in cross-linguistic settings, because it is very common for the parents of deaf children not to be native speakers/signers of the language the child learns to speak/sign. (The other research sources that have been useful to the field are migrant families and pidgin-creole communities.)

The similarities include a critical window for sign language -- children who learn it early invariably become fluent signers, excepting cases like those with neurological disorders; children who hit puberty before learning to sign generally never become native-level signers except with intensive study, and even then errors are frequent and their signing is jerkier and less fluid.

A deaf infant, instead of babbling vocally, will do the equivalent with his hands. In fact, the timeline for sign language acquisition among children is so similar to that of spoken language acquisition -- e.g. we can expect a six-year-old signer to be very fluent, replete with different inflections, conjugations, declensions, irregular constructions and syntactic phrase-shifting -- that biologically, the mechanism for sign language acquisition must be very similar to the mechanism for spoken language acquisition.

Even signed languages obey all the rules of Chomsky's Universal Grammar. I won't go into hardcore syntax here, but the idea is that there are universal rules that govern language: expressions can be analysed as sets of verb phrases and noun phrases, with embedding rules, phrase-order rules, and rules for how the order of a phrase can shift in different situations. An example frequently used for English-speaking audiences to illustrate shifting word orders is question formation -- which verb and subject do you invert? When asked to turn a sentence with a relative clause, like "The goat that is in the garden is eating the flowers", into a question, invariably all the young children fluent enough to understand the sentence invert the right phrases. They do not form constructions like "goat the that is in the garden eating the flowers" or "is the goat that in the garden is eating the flowers?" -- and Chomsky argues this must be a consequence of a natural rule of universal grammar; this is known as the poverty-of-the-stimulus argument. For one, children learning language-specific rules generally demonstrate their acquisition of the rules in stages -- we regularly observe young children saying "he hitted her!", "I bringed juice for doggie" or "she giggled me!", evidence that they haven't completely learnt a rule -- yet we observe no children making wrong inversions. There are other arguments too, such as: how would children learn such an elaborate algorithm otherwise, and use it so fluently and automatically?
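
To make the structure-dependence point concrete, here is a toy sketch of my own (not from Pinker; the sentence handling is deliberately naive) contrasting the two question-forming rules a learner could in principle induce. Only the phrase-based rule survives relative clauses.

    # Toy contrast between a linear rule and a structure-dependent rule
    # for forming yes/no questions.
    SENTENCE = "the goat that is in the garden is eating the flowers".split()

    def linear_question(words):
        """Wrong rule: front the FIRST 'is', counting word by word."""
        i = words.index("is")
        return ["is"] + words[:i] + words[i + 1:]

    def structural_question(subject_np, aux, rest):
        """Right rule: treat the whole subject NP (relative clause and
        all) as one unit, and front the auxiliary that follows it."""
        return [aux] + subject_np + rest

    # The child's parser groups the subject into a single noun phrase:
    subject = "the goat that is in the garden".split()
    predicate = "eating the flowers".split()

    print(" ".join(linear_question(SENTENCE)) + "?")
    # -> is the goat that in the garden is eating the flowers?  (wrong)
    print(" ".join(structural_question(subject, "is", predicate)) + "?")
    # -> is the goat that is in the garden eating the flowers?  (right)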

Teenage and adult learners of sign languages, however, generally commit many violations of universal grammar, probably because they are not using their L1 cognitive machinery to process the language -- machinery that would automatically organise words into phrases (e.g. mentally grouping "goat that is in the garden" into a noun phrase, or NP) and clauses -- just like adult learners of spoken languages. That a sign language can even be analysed along noun-phrase/verb-phrase lines is telling: an ideographic system needn't permit such analysis. Ask a linguistics student to find the noun phrase or verb phrase in a picture, painting or musical score and you will get protests about the silliness of the exercise. Computer languages aren't organised along NP/VP lines either... and not even veteran computer programmers can "think" in the computer languages they know well; it's always "code", and generally not a very effective medium of thought for humans, even though computers do very well in it, performing vast amounts of symbolic operations.

Sign languages can be generated spontaneously, and are capable of undergoing a process called "creolisation", just like spoken languages. It's the spontaneous origins of many sign languages that make their study useful to linguists working on spoken languages -- it sheds light on the origin of language in general. Children exposed to ungrammatical or pidgin input (e.g. from migrant parents on a plantation, a colony of many ethnicities speaking each other's languages brokenly, or non-native parents who sign imperfectly), without a good competing grammatical stimulus, will spontaneously generate corrective rules. The resulting language obeys a grammar, complying with all the rules of universal grammar, and is as complex and full-fledged as any other language. It's a good example of how grammar, though it may be culturally transmitted, is partly a consequence of universal biology. However, for spontaneous language generation to occur, you need at least two people -- a baby raised by wolves will not generate his own language, while there are many documented cases of secret, grammatically full-fledged languages spoken between pairs of close twins.

Spontaneous language generation can fuse different source languages -- the most prominent examples are creoles like Haitian Creole, which combines elements from French, Spanish, African and indigenous languages into a new grammatical system; I speak a creole called Singlish, with elements from Hokkien/Teochew Chinese, Malay and English, among others. With signed languages, the most prominent example is the emergence of a single "Nicaraguan Sign Language" (LSN) from the pooling together of home sign after the first schools for the deaf were opened in Nicaragua in 1979. Suddenly, previously isolated deaf children were pooling signs on playgrounds, school buses and in classrooms, then becoming fast friends and signing to each other in each other's homes... However, this LSN was like the pidgin language spoken by the parents of immigrant children, a mixture of the various languages of their new environment; the conventions were often irregular, with jerky, non-fluid signing, and trends rather than real grammatical rules. The younger children exposed to LSN weren't content; they took it and transformed it into Idioma de Señas de Nicaragua (ISN), spontaneously standardising many of the grammatical rules and making obligatory many patterns that were previously mere trends. The new language also simplified many of the common constructions, as a native language would, and showed all the classic assimilation rules found in natural spoken language. How a group of 4- to 8-year-olds achieves such efficient consensus on a matter like creating a unified "fusion" language has to be pretty fascinating, if you ask me; the process is probably unconscious and memetic. I imagine a single child having a particularly ingenious way of signing something; some other child sees it and with excited eyes gestures something to the effect of "oh, that's such a cool sign!", and soon everyone's copying it. It's such a good meme that it outcompetes all other signing patterns for the same concept or grammatical rule; some signs occupy a slightly different niche (slightly different connotations, etc.) and survive, becoming the signed equivalents of synonyms. Repeat for each convention. The process of creolisation is effectively identical for speaking children, only with spoken words.

There's a good argument to be made that, objectively, all languages are equally difficult for the infant, ultimately have the same complexity, and are biologically constrained to be that way. Latin has a moderately complex inflection system that tortures high school students, but in exchange its word-order rules are loose and its phonology relatively simple; Chinese languages have little inflectional grammar but complex syntactic and phonological rules. When English dropped the vast majority of its gender, case and conjugation systems, it converted other previously ordinary words into grammatical auxiliaries and suddenly gained a complex word-order system; and when English dropped its vowel-length distinction, it also started recognising a large number of vowel phonemes that previously didn't exist, which is why English has a large vowel inventory compared to most other languages. English vowels are a torture for Arabic speakers, who have only 3 vowel qualities, yet an abundance of exotic consonants. In fact, a general rule I'd like to propose: when a language undergoes a change that simplifies or adds complexity to its grammar, it must also undergo a complementary change that compensates for the added or removed complexity in the opposite direction -- in a different area, of course.

So that would imply signed languages must be just as complex as spoken languages -- just extracting grammatical information from the secondary visual cortex rather than, say, the auditory cortex. Curiously, the frontal cortex, auditory cortex and visual cortex converge roughly around the Sylvian fissure, along which are located several important sites for language processing (though recent research suggests we've misidentified the actual locations of Broca's and Wernicke's areas -- partly because of the imprecision of 19th-century methods). Research into signed languages may have implications for phonics and for children who learn spoken languages, and learn to read and write in them, because it shows the brain can process visual information grammatically. It is not the case, however, that the visual information in sign language is processed as a direct representation of symbolic thought and ideas: damage the same language centres that speaking people use, and sign language ability goes with it. In fact, Google tells me there's some good research out there on "sign language aphasia".

However, there's a further complication for neuroscientific research into linguistic processing and the shared machinery that spoken and signed languages would both use: neuroplasticity. If you damage or lesion the areas commonly regarded as Broca's area and Wernicke's area in a child who is young enough (e.g. a bad fall at two weeks of age), the child will probably still grow into a normal, healthy, fast-talking youngster. Why? The appropriate neural tissue couldn't develop in the place it usually develops, so it develops somewhere else, sometimes in radically different areas (and the remnants of these lesions can show up 30 years later on medical imaging scans of an otherwise neurologically normal patient). But neuroplasticity is also probably part of the mechanism for native-level sign language processing: if a language centre is supposed to be receiving signals from the auditory cortex but isn't (or isn't receiving enough signals, as with a damaged ear or damaged auditory cortex), some neuroplasticity mechanism probably causes it to adapt to processing input from the visual cortex instead, growing and sending out networks and pioneer neurons accordingly. Notably, the degree of neuroplasticity is greatest in children -- if being fluent in a language requires your language-processing centres to send out pioneer axons (or even new neurons or networks) to the appropriate auditory or visual cortex and vice versa, and you're 13 years old ... well, hard luck.

10 comments:

Anonymous said...

Radical,

Do you know sign language yourself? I've studied sign language casually for four years now and some of what you're saying doesn't fit with what I've seen of sign from both hearing and deaf users. Where did your information come from?

I'm not trying to be confrontational. I'm just being my normal, curious self who wants what I'm told to make sense with what I know or have experienced.

The fundamental permitted gestures in a sign language do not represent "ideas" -- like phonemes, by themselves they are often meaningless, until combined together. The gestures obey the recombination principle -- if you make one gesture and then another to form a word, that word is not necessarily related in meaning to either gesture, much like "party" is not a semantic fusion of the concepts "par" and "tea".

This isn't true, from what I know of sign language (ASL). The word "today", for example, is signed by combining the sign for "now" with the sign for "day." The word "today" is semantically related to the concept of "now day." Off the top of my head, I can't think of any multi-part signs -- where each part of the sign also has a meaning of its own -- whose meaning is unrelated to that of their constituent parts. Can you give me an example or two?

Teenage and adult learners of sign languages, however, generally commit many violations of universal grammar,

Where did you find this information? Again, I'd like what you're saying to line up with what I know in real life. ASL (as distinct from Signed English) is exceptionally fluid in its word order.

Okay, you've awakened my curiosity and I want more clarity about what you've said. I'm also curious where you found the information about adult learners of sign being less fluid than those who learn it as children. It's certainly true of me, but I'm far from fluent. I'm thinking of the many deaf who don't learn sign language until they are adults, maybe by going to Gallaudet for college. Is it true that they'll never sign as fluently as those who started as children?

le radical galoisien said...

The bulk of what I know about the cognitive science behind sign language comes from Steven Pinker (author of books like "The Language Instinct"), other books, profs and teachers. I used to know American Sign Language in elementary school, but I stopped using it after fourth grade and my ability rapidly dropped off after that. (It's also around the same time I lost the ability to speak Chinese.) My mother is also a signer of Chinese Sign Language (which is amazingly inflected and definitely not similar in grammar to the Chinese languages).


"The word "today" for example, is signed by combining the sign for "now" with the sign for "day." The word "today" is semantically related to the concept of "now day."


Well, yeah, that's a compound word. "Today" itself used to be seen as a compound word by Old English speakers. Compound words are perfectly possible in any language. In fact, you see this frequently in Mandarin Chinese: "renren" means "everyone", but it's a compound reduplication of the word "ren" (person) ... the process is rather analogous to derivation.

The recombinatorial principle (one that distinguishes sign language from a hypothetical ideographic system) states that altering the combination and sequence of phonemes can drastically alter the meaning of a word, and that you have meaningless gestures that only make meaning when put together. Some meaningless gestures are derived from simple pictorial gestures, much as some phonetic radicals and characters are derived from simple pictorial characters: the sound element / gesture is borrowed without the meaning necessarily being borrowed. As an English example, the word "a" is composed of one phoneme, but the phoneme itself is meaningless -- its being found in other words like "America" doesn't mean those words have an ideographic element of "singular indefiniteness" to them. The words "tale" and "late" in English contain the same phonemes but are completely different words. In any sign language, you will find words that have the same gestures arranged differently to make drastically different words. (And at the same time, many gestures can be co-articulated, and a co-articulated gesture, depending on the sign language used, can often be distinguished from a sequence of gestures.)

le radical galoisien said...

"Can you give me an example or two?"

The best place to start is Saussure's principle of the "arbitrariness of the sign" -- this is what distinguishes human language (spoken and signed) from a hypothetical ideographic system. In fact, I argue that human language is phonetic and that sign language is phonetic, only with vocalisations replaced by gestures and the places of articulation expanded. Would this really be the case in an ideographic system?

In fact, linguists call these fundamental gestures of a sign language "phonemes" -- I think they used to be called "cheremes" or something like that, but as linguists really studied sign languages, they found that these gestures behaved so identically to human phonemes that they just started calling them phonemes.

Just as spoken phonemes can be broken down into separate elements of places of articulation, primary manner of articulation (voicing), secondary manner of articulation (aspiration, vocal registers, nasalisations) and so forth, so can fundamental gestures. There are gestures that are exactly like one another except differing in one element (like place of articulation, movement style, hand shape, etc.), forming a different phoneme that can alter a word drastically, much like "pack" and "back" are drastically different words, even though /p/ and /b/ differ only by manner of articulation (voicing). By changing a single element, you can create a drastically different phoneme, and therefore create something linguists call "minimal pairs".

Of course, there are going to be areas where changing one of the elements of a phoneme results in a word that is related -- as I recall, the difference between "mother" and "father" in ASL is only in place of articulation (where the hand is placed) -- but then this is like English, where "papa" and "mama" differ only by manner of articulation (/p/ is an aspirated bilabial stop; /m/ is a nasal bilabial stop). And then, of course, there are combinations of elements that do not make sense. If you fricativised the stops to make, say, /f/ or /v/, "fafa" and "vava" do not make any sense in English, and I suspect that in ASL merely changing one of the fundamental elements of a fundamental gesture in a word will likewise make the word nonsensical.

This is on a phoneme level. There are a lot of minimal pairs in sign languages, differing only by a single element. If you take the gesture for "mother" or "father" but change the place of articulation to the chest (did you know Wiktionary has an ASL dictionary? see http://en.wiktionary.org/wiki/5@Sternum-FingerUp_Contact), you make "fine", which is totally unrelated to the words mother and father, much as "fine" itself is totally unrelated to the word "vine". (/v/ is exactly like /f/, only pronounced with the vocal cords vibrating.)
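
To tie the spoken and signed cases together, here is a minimal sketch of my own (the feature values are simplified stand-ins, not accurate phonetic or ASL transcriptions) of phonemes and signs as bundles of contrastive elements, where changing a single element yields a minimal pair:

    # Phonemes and signs as bundles of contrastive elements
    # (feature values simplified for illustration).
    PHONEMES = {
        "p": {"place": "bilabial",    "manner": "stop",      "voiced": False},
        "b": {"place": "bilabial",    "manner": "stop",      "voiced": True},
        "f": {"place": "labiodental", "manner": "fricative", "voiced": False},
        "v": {"place": "labiodental", "manner": "fricative", "voiced": True},
    }
    SIGNS = {
        "mother": {"handshape": "5", "location": "chin",     "movement": "contact"},
        "father": {"handshape": "5", "location": "forehead", "movement": "contact"},
        "fine":   {"handshape": "5", "location": "sternum",  "movement": "contact"},
    }

    def contrast(a, b):
        """List the elements on which two bundles differ."""
        return [k for k in a if a[k] != b[k]]

    print(contrast(PHONEMES["f"], PHONEMES["v"]))      # ['voiced'] -> fine/vine
    print(contrast(SIGNS["mother"], SIGNS["father"]))  # ['location']
    print(contrast(SIGNS["father"], SIGNS["fine"]))    # ['location'] -> unrelated word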

le radical galoisien said...

"Where did you find this information? Again, I'd like what you're saying to line up with what I know in real life. ASL (as distinct from Signed English) is exceptionally fluid in its word order.

Okay, you've awakened my curiosity and I want more clarity about what you've said. I'm also curious where you found the information about adult learners of sign being less fluid than those who learn it as children. It's certainly true of me, but I'm far from fluent. I'm thinking of the many deaf who don't learn sign language until they are adults, maybe by going to Gallaudet for college. Is it true that they'll never sign as fluently as those who started as children?"


This information specifically is from Steven Pinker's book "The Language Instinct", which makes a good case for the biological basis of human language. Steven Pinker works in the cognitive science and linguistics fields at MIT. His book gives a really good explanation of universal grammar for laymen -- I've only tried to condense his explanation here.

Tuition, a (single) parent and a career dictate that I can't really study a sign language at the moment, but I'm really interested in sign language principles, because research into them also promises to shed light on the biological basis of human language in general. (One of my ultimate dreams is to map the human language pathway, but that may not be achieved within my lifetime.)

Stokoe at Gallaudet is actually responsible for the foundation of what linguists know about sign language -- it was he, in fact, who first applied linguistics to sign languages, bringing Bloomfield's structural-linguistics methods to ASL. It's why linguists are so confident about many of the cognitive similarities between sign languages and spoken languages.

There's a good experimental basis for the distinction between child and adult learners of sign language -- the two modes of acquisition so closely resemble L1/L2 acquisition in spoken languages that it really cannot be a coincidence. L2 learners who learn a spoken language through immersion often speak "pidgin", and have to study intensively (often by book) to even approach native level. Yet deaf children happily and spontaneously learn sign language with the same carefreeness as speaking children. You do not hear of speaking children saying, "I hate my native language! I want to stop studying and play!" -- because learning their native language is part of play. The same applies to deaf children. Excepting neurological disorders, young deaf children sufficiently exposed to sign language will invariably pick it up as long as they are young enough, even if the deaf child in question hates learning and hates school. Deaf children just can't help it. Speaking children just can't help it. It's part of the "language instinct". Adults, however, are a completely different story.

le radical galoisien said...

Finally, one thing to do is just witness the Language Instinct in young children.

Note that children do not directly learn their native language from their parents. A lot of black parents actually ridicule white parents for "baby talking" to their children, as though children need to be actively taught their native language. This is not the case -- children will spontaneously pick up their native language, deaf or speaking.

The key is that they must have access to a language community, which may be as small as 2 speakers. Deaf children not exposed to a deaf community will not spontaneously pick up sign language. If you expose deaf children to each other, but do not give them any input, or give them scattered and disorganised language input (e.g. their caretakers are broken signers of 5 different sign languages), they will spontaneously generate their own sign language.

This is not the case with adults (adults have to actively learn and intensively study a second language).


Another fascinating example is twin studies. Many twins (by some estimates up to 40%) have their own private languages. It's not just high-school code ... it's a full-fledged grammatical language. (New spontaneously generated languages tend to be like creoles -- preferring analytic syntax rules over morphological inflection.) My conjecture is that nearly all deaf twins will develop their own sign language if they are not actively separated from each other -- that is, you will rarely find an adult deaf twin who isn't natively fluent in some sort of sign language.

Anonymous said...

Just a minor correction--Steven Pinker has actually been at Harvard for most of this decade, though he was at MIT for many years before he made the cross-town move.

Anonymous said...

The fundamental permitted gestures in a sign language do not represent "ideas" -- like phonemes, by themselves they are often meaningless, until combined together. The gestures obey the recombination principle -- if you make one gesture and then another to form a word, that word is not necessarily related in meaning to either gesture, much like "party" is not a semantic fusion of the concepts "par" and "tea".

This is where my confusion came from. You aren't really talking about making one gesture and then making another to create a word. You're talking about how handshape, palm orientation, location, speed and direction of movement, and facial expression and body posture (non-manual markers) combine to create the sign's meaning. It's the elements of a sign you're talking about, not one gesture followed by another.

And now I'm totally unconfused. Yes, those elements are similar to phonemes. Each can be changed separately and can make minimal pairs, or nonsense gestures.

Thank you for being willing to follow through with this and clear up misconceptions.

le radical galoisien said...

Well, I used examples at the phoneme level because they were the most dramatic (and clearest).

But it can occur at the morphemic level too. I no longer know enough sign language to give examples, but I'd expect there are equivalents of tack/cat contrasts in sign language (note that tack and cat are transcribed /tæk/ and /kæt/ respectively).

I would use /æt/ and (hypothetical) /tæ/, but English has its own vowel complications -- /æ/ doesn't like to be found in open syllables (except in things like the bleating of sheep), and vowel quality is affected by English's stress-based phonology. Each language has its own quirks about what is contrastive.

Morphemes are meaning-bearing units, and in highly inflected languages, morphemes and words can often be switched around in sequence with no change in meaning (though perhaps a change in emphasis or style), because the grammatical information about the relationships between the morphemes is stored in the inflections.

And how phonemes form morphemes is slightly different in sign languages, for pragmatic reasons: individual phonemes take longer to sign (where a spoken phoneme takes 50-250 ms to say, a gesture might take 500-2500 ms to make), so words are often polymorphemic, allowing signed conversation to proceed at the same speed as spoken conversation.
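
Some back-of-the-envelope arithmetic (my own, using the rough timings above plus illustrative assumptions about morpheme counts) shows how polymorphemic signs keep the information rate comparable:

    # Rough throughput comparison (illustrative numbers only).
    spoken_phoneme_s = 0.150   # midpoint of the 50-250 ms range above
    gesture_s = 1.5            # midpoint of the 500-2500 ms range above

    phonemes_per_word = 5      # assumption: a typical spoken word
    morphemes_per_word = 1     # assumption: mostly monomorphemic words
    morphemes_per_sign = 3     # assumption: polymorphemic signs

    spoken_rate = morphemes_per_word / (phonemes_per_word * spoken_phoneme_s)
    signed_rate = morphemes_per_sign / gesture_s

    print(f"spoken: ~{spoken_rate:.1f} morphemes/sec")  # ~1.3
    print(f"signed: ~{signed_rate:.1f} morphemes/sec")  # ~2.0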

There are different types of contrast at different levels of language: phonemes are combinations of elements, morphemes are sequences of phonemes, words are sequences of morphemes, and sentences are sequences of words. One aspect of structural linguistics that needs specific adaptation is "phonology" -- it's harder to define the difference between a syllable (which in sign languages often bears multiple morphemes), a word and a sign here, because the type of "silence" you literally see is different.

Cam said...

As a linguist I want to thank you for your post. There is a growing area in the field of linguistics that is applying all the theoretical principles to sign language and guess what? It works!

There are so many misconceptions about sign languages, including that they're not "real" languages, but that idea is just false.