For a writing system to express precise and fluent thoughts, it must be dependent on sound -- because that is the basis of communication. Sure there's art and music ... but you can't really communicate fluent and precise ideas with them, only gists. Could you communicate something like Newton's laws of physics to someone who didn't know them based on a picture, or a series of pictures?
That's what I thought!
Thank you!
This may be the aspect of 'balanced literacy' that makes me most crazy: the obsession with 'meaning.' For balanced literacy folk, reading is about extracting meaning from texts. That's if you're lucky; here in my district reading in Kindergarten is now about 'making meaning' from texts. So we are told.
Having Kindergarten children who can't read spend their time extracting meaning from (authentic!) texts is nonsense on stilts. The simple fact is that you cannot extract meaning from text without knowing what the words on the page are, which means knowing the sounds for which the printed words stand.
Spoken language is sound; printed language is a visual representation of sound. It is a translation of an aural medium into a visual medium. Like cued speech.
Thus, your basic 5-year-old learning to read does not need to know the 'meaning' of the letters c-a-t. He needs to know the sounds that the letters c-a-t stand for; he needs to know that the letters c-a-t stand for the spoken word kat, or /kæt/ in IPA transcription.
That's because your basic 5-year-old already knows the meaning of 'cat.' Seriously. Both my autistic kids knew what a cat was at age 5. They knew what a cat was at age 2, for god's sake.
What they didn't know was that the letters c-a-t, printed on a page, stood for the spoken word kat. That was the missing knowledge, not 'what is a cat?' or 'what do you make of cats?' or 'what is the author saying about cats?'
(Andrew also had to learn that the spoken word 'cat' stood for the animal. For many years he had severe auditory processing problems, if that is the correct term. I assume he still does. What I don't know - what I'd like to know - is whether he and Jimmy also have some kind of 'core' deficit in language per se. Why can't they talk? Is it because they can't 'hear' & thus can't learn the grammar of the English language the way typical children do, or is it because of .... something worse. I don't know.)
Back on topic: I remember, a couple of years ago, watching an online video of Siegfried Engelmann saying kids should be taught to read words in isolation. (Pretty sure that's what he said.) I remember finding that almost a scandalous statement at the time, and although I was inclined to take on faith anything Siegfried Engelmann said, I experienced a mild failure of nerve contemplating the image of young children reading aloud lists of words in isolation, outside of "connected text." I'd been too long in the public schools not to have had drilled into my very soul the notion that teaching anything in isolation is wicked.
It wasn't until I enlisted in the reading wars that I realized what Engelmann was talking about: he was talking about the fact that printed language is a representation of, or code for, spoken language. Printed words represent spoken words. Not meanings. Kids need to be able to read words fluently strictly from the printed letters on the page, without any context to "help" them. All good readers are able to read words outside of context.
Confirming the psychologists and educators who emphasize phonics, mechanistic letter decoding, L, accounts for the lion's share (62%) of the adult reading rate. This is recognition by parts. Holistic word recognition, W, accounts for only a small fraction (16%) of reading rate. The contextual sentence process, S,* accounts for 22% of reading rate, on average, but is variable across readers (mean ± SD = 87 ± 63 word/min), which may reflect individual differences in print exposure. [snip]
Understanding individual differences in reading rate would be invaluable. The breakdown in Table 2 compares the contributions of each process across observers. There is surprisingly little difference in the contributions of each of the 3 processes across our group of 11 normal readers. However, note that observers JS and KT, our fastest readers, also have the highest percent contribution of the S (context) process. This supports the idea that the context process reflects differences in print exposure [19]. Even so, these readers are fast mostly because their L processes are fast.
Parts, Wholes, and Context in Reading: A Triple Dissociation, by Denis G. Pelli and Katharine A. Tillman, PLoS ONE, August 2007, Issue 8
Fast readers are fast phonetic readers who also use context and word shape.
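Pelli and Tillman model the three processes as contributing additively to overall reading rate, so the percentages above are easy to sanity-check. A minimal sketch -- the component rates below are invented for illustration, chosen only to reproduce the paper's average 62/16/22 split, and are not the paper's measurements:

```python
# Additive model: each process contributes its own rate (words/min)
# to the total reading rate. L = letter-by-letter decoding,
# W = holistic word recognition, S = sentence context.

def percent_contributions(rates):
    """Return each process's percent share of the total reading rate."""
    total = sum(rates.values())
    return {name: round(100 * r / total) for name, r in rates.items()}

# Illustrative component rates only (not data from the paper).
rates = {"L": 310, "W": 80, "S": 110}
print(percent_contributions(rates))  # → {'L': 62, 'W': 16, 'S': 22}
```

The point of the additive framing is that a fast reader can be fast mostly through a fast L process even while drawing a larger share from context, which is what the paper reports for its fastest readers.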
International Phonetic Association
International Phonetic Alphabet (pdf file)
Thank You, Whole Language at Illinois Loop
Whole Language Lives on by Louisa Moats
Whole Language High Jinks by Louisa Moats
* "Contextual sentence process" = context, i.e. the meaning of the preceding text. When a fast reader reads the next word (partly) on the basis of the meaning of what he has read thus far, he is using "S." If you're reading a blog post about balanced literacy and you spot an upcoming word that starts with 'ba' you're going to very rapidly read 'balanced' instead of, say, 'ballast.'
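The 'ba' → 'balanced' example can be caricatured as a frequency lookup over words favored by the preceding text. A toy sketch, with made-up candidate words and counts -- real context effects are probabilistic and far richer than this:

```python
# Toy illustration of the "S" (context) process: given a glimpsed
# prefix, pick the completion most favored by the reading context.
# The counts below are invented for the example.

context_counts = {"balanced": 12, "ballast": 0, "banana": 1}

def predict(prefix, counts):
    """Pick the most context-frequent word starting with the prefix."""
    candidates = {w: c for w, c in counts.items() if w.startswith(prefix)}
    return max(candidates, key=candidates.get)

print(predict("ba", context_counts))  # → 'balanced'
```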
26 comments:
"For a writing system to express precise and fluent thoughts, it must be dependent on sound -- because that is the basis of communication."
I would modify this statement to allow for a written representation of sign language. Such a thing *does* exist (although not widely used) and I don't see why it can't express "precise and fluent thoughts."
-Mark Roulo
right
Well yes, hence the follow-up sign language post :)
The thing is, sign language is in itself really a converted form of the spoken language mechanism, and sign language obeys universal grammar so much and its elements behave so much like spoken language morphemes, phonemes, etc. that really sign languages and spoken languages together should be classified under "Saussurian languages" (or maybe just "human languages") or something.
Certainly if you tried to use whole language strategies (try to guess the meaning, etc.) with sign languages, you'd have the exact same issue: sign languages aren't ideographic. The signs of sign languages are often arbitrary. (Saussure's arbitrariness of the sign.)
"The thing is, sign language is in itself really a converted form of the spoken language mechanism..."
I'll point out that it is possible that spoken language is a converted form of sign language. It looks like the FOXP2 gene is required for speech in humans and also that this gene mutated/evolved/whatever around 120,000 to 200,000 years ago. So ... prior to this, no *speech*. But unless one believes that until ~200,000 years ago upright bipeds were not smart enough to have language, then sign language preceding spoken language is the logical sequence.
-Mark Roulo
--"For a writing system to express precise and fluent thoughts, it must be dependent on sound -- because that is the basis of communication."
Well, symbols work too. Mathematics requires precision, and its notation expresses precise and fluent thoughts. You can perfectly express Newton's second law, or the properties of a subgroup of a group, or any truth in topology, or anything else expressible by math without sounds. In fact, lots of mathematicians are quite comfortable thinking in symbols without subvocalizing them, and they manipulate them without resorting to written or spoken steps in between to explain the truth.
Chinese pictograms might also be precise without relying on an underlying voice or aural mechanism -- I don't know. But don't denigrate symbols.
The thing is -- if you disable FOXP2 (or damage it) -- are you able to sign?
an interesting evolutionary question (with a lot of medical and educational implications)
Several thoughts:
FOXP2 could be in itself an updated form of a gene that was needed for sign language ability
FOXP2 overrides sign language development when activated by appropriate environmental cues
Notably, spontaneous language generation, creolisation, etc. will not occur without significant social interaction. (It will also not develop even if the child is isolated but otherwise maintains a relationship with abusive parents who speak to the child ... which would hint at some of the other nonlinguistic cues required.)
Sign language development, in the neuroplasticity interpretation, occurs when aural input is disabled and the language processing centres are able to use visual input instead....
The truth could be a mixture of these interpretations and more. Language is probably a very complex gene complex, and FOXP2 probably interacts with hundreds of genes to set up speech ability.
One curiosity is that FOXP2's relatives help with voice and phonation control in primates; if language acquisition mechanisms depend, say, on babbling, but the baby is unable to babble (yet still receives aural input), then a lot of language development processes might misdevelop.
I don't know a lot about FOXP2. My classes don't usually go this in-depth (yet).
It would be very helpful to explore sign language ability's relationship to the presence of a FOXP2 mutation. If you took an infant with a confirmed FOXP2 mutation, the question of whether he or she could learn sign language or not would be very enlightening.
In fact there might be some literature on the matter ... let me check Google Scholar (and maybe you know where to look too.)
Allison: but Chinese symbols are not pictograms!! lol, this is exactly what I tried to address in my "linguistics, sign language and writing" post.
(To the FOXP2 discussion, I should note that language development is triggered even when the language community is as small as a single child and two broken-language parents, as mentioned before ... creolisation will occur, where the errors are self-corrected according to the rules of universal grammar, though perhaps not with full fidelity to the original language -- but complexity is restored. Yet put this child with two abusive parents who speak the child's L1 perfectly in an isolated environment and spontaneous language formation will not occur. Again, a very curious thing ... but put the child with other children on a slave plantation or in a colonial environment and creolisation will occur.)
"Sign language development, in the neuroplasticity interpretation, occurs when aural input is disabled and the language processing centres are able to use visual input instead...."
Sign language also develops when aural input is enabled and sign input is also enabled.
This is the environment for baby signing, and it works quite well. The child picks up signing before it is able to vocalize, but also picks up the ability to speak (in my experience with no delay).
-Mark Roulo
"Well, symbols work too. Mathematics requires precision, and its notation expresses precise and fluent thoughts. You can perfectly express Newton's second law, or the properties of a subgroup of a group, or any truth in topology, or anything else expressible by math without sounds."
That's precise IMO, but not fluent.
Certainly they could do the symbolic manipulation in their heads -- but would interpersonal fluent communication be possible? Not to most linguists.
I think Steven Pinker addresses algebraic notation and formal mathematical languages in his book, The Language Instinct. Certainly you use logic calculus and stuff to calculate the truth values of a complex system of propositions, but are they feasible systems of communication that can replace spoken language? The processing is conscious, and is much slower, and not that automatic. You can be fluent in the sense you can read the symbols and process it in 5 seconds -- but written in human language it might have only taken 500 ms. The human language processing centres are very sensitive -- a 60ms difference in voice onset time can often determine for a listener whether a phone is a /p/ or a /b/ (and that determines whether you meant "I like Bob" or "I like pop".)
The other thing is that human language is biologically evolved to be organic -- it's very easy by memetic processes for a language to update itself with new grammar rules, introduce new words into an otherwise "closed set" of grammatical function words -- basically change the rules of the game. It's dynamic, organic and spontaneous, and sometimes amazingly rapid: it only took several years for a bunch of elementary schoolers and preschoolers to creolise LSN into ISN (with dramatic differences).
(This is beyond simply adding new words for new concepts, which adults do all the time.)
That is, organic processes will ensure that human language will always remain "full-fledged" and not merely become pidgin. Would there be complexity-correcting mechanisms if say the world took to adopting a formal symbolic language, to make sure the rules were never too simple or too complex? (Both can be problematic; too simple and ambiguity becomes a big problem -- too complex and it just becomes a pain in the ***. Somehow, biology automatically gets children to find a nice tradeoff for us.)
"Sign language also develops when aural input is enabled and sign input is also enabled.
This is the environment for baby signing, and it works quite well. The child picks up signing before it is able to vocalize, but also picks up the ability to speak (in my experience with no delay)."
Ah yes!
Sign language ability develops in speaking children when the deaf community is ubiquitous (this is the case I think in various communities in North Africa, the Middle East and Latin America.)
Thanks for reminding me -- certainly it makes me think twice about how language acquisition processes are triggered.
Also there's an important distinction to be made between full-fledged languages and pidgins ... homesign is usually not a full-fledged language -- in the sense that the language is usually significantly less fluent and less complex than any natural language. This was the case with LSN, which was essentially pooled and pidginified homesign. (The fact that it was pooled at all, even without creolisation, is an amazing testament to organic and dynamic social mechanisms.) The group of children that were young enough creolised this into ISN, a full-fledged natural language.
The distinction would be important because a full-fledged language development would predictably involve the activation of genes or pathways that pidgin acquisition would not.
However, it could be the case that the common instances of baby sign are analogous to the early stages of L1 acquisition -- which in themselves don't seem full-fledged. (i.e. the stage where babies talk in sentences of two words: "me juice" or "milk allgone" or "car red!".)
Perhaps baby sign isn't allowed to flourish, because the speech language dominates as the child matures and sign language development is abandoned.
--Certainly they could do the symbolic manipulation in their heads -- but would interpersonal fluent communication be possible? Not to most linguists.
Huh? They can communicate Newton's laws without words, period. They can communicate hundred page proofs, too. What's not fluent about that? What's the definition here that I'm not understanding?
--but also picks up the ability to speak (in my experience with no delay)."
Is there any evidence of this, one way or another? Anecdotally, all of the children I've met who were hearing-abled and taught to sign as babies were late speakers. Sample size = more than half a dozen, less than 10.
Do babies really only talk in "sentences" of two words, or is it that the adults only recognize two words and are glossing over that slur of sounds in between that represent the rest of the constructions?
Allison: the challenge the author of "The Ideographic Myth" posed to readers was -- could you design a nonphonetic system that would accommodate a translation of Hamlet?
Every natural language can translate Hamlet -- sure there's some finer rhyme and meter subtleties involved, and you might lose the sexual punnery, but you can do it. For every unique idiomatic construction in one language, there is at least a periphrastic equivalent (roundabout way of saying it) in another.
But it would be a feat IMO, to do this in a sort of formal, graphical algebraic language system that could be communicated feasibly. If you taught it to children -- they would almost certainly change it into a form of sign language where many of the designed "artistic" rules would be lost and many new rules implemented, and an ideographic principle would almost certainly be rejected (at least that's my prediction). Children would change it (unconsciously) in such a way that the language was 1) a reliable medium of communication 2) a very fast medium of communication. Natural sign language is communicated about as quickly as spoken language -- and it does this by having "phonological" rules (e.g. having/preferring multimorphemic syllables and words) that fulfill this pragmatic concern. The first sign languages designed by professors weren't that way though -- it was the children who acquired them who changed the rules to allow the language to be more natural.
Sorry, sorry, I know Catherine gets enlightened and pulls things up front but to the frazzled mother of two toddlers, I never get far enough through the old thread to see what I needed to know.
"Do babies really only talk in "sentences" of two words, or is it that the adults only recognize two words and are glossing over that slur of sounds in between that represent the rest of the constructions?"
A fascinating hypothesis. Possibly, it could be information babies want to express (or is information babies have noticed in their linguistic environment), but something their brains are not capable of expressing comprehensibly yet.
Interestingly, there are often discrete "2-word" and "3-word" stages, as though the child in the 2-word stage were incapable of expressing any sentence larger than 2 words -- the kneejerk explanation would be that their syntactic machinery isn't developed enough yet. Most notable of all is that babies do end up speaking in sentences, even if they are somewhat broken sentences ... whereas aphasics often do not (their speech stream ends up being more like a laundry list of nouns and verbs and "Tuesdays", etc.)
From a brief look at FOXP2 literature (especially since it's implicated in so many other nonlinguistic mechanisms), I conjecture that FOXP2 is required for full-fledged sign language development too.
I can't find any literature that specifically addresses FOXP2 and sign language ability on Google Scholar. Help? (Maybe it's data that hasn't been taken yet?)
"Anecdotally, all of the children I've met who were hearing-abled and taught to sign as babies were late speakers."
My "in my experience with no delay" comment was referring to my child :-) So, I've got a sample size of one for "no delay."
One difference might be that my wife and I used both speech *and* ASL quite a bit. It wasn't like one of us was deaf.
-Mark Roulo
Hmm. What, in your opinion, is a "late speaker"?
It's been noted that some children take longer to acquire language than others, with some of the stereotyped "stages" lagging behind by as much as 1 year compared with another child ... but the child becomes a full-fledged speaker all the same (and while I haven't looked at the literature, I'd imagine there'd be speakers who reached the hallmark stages late but go on to outscore many "early speakers" on the CR portion of the SAT). Why this time lag exists is a mystery -- but something we notice more is that this distribution is more or less the same from language to language. I'd say there is no language that is really more complicated than another, on the basis that children on the whole spend the same time learning Mandarin as a first language as they would English -- it's just that the languages have different complexity in different areas (though the sum complexity would be the same), and this becomes a problem in L2 acquisition. (You might be good at processing stress patterns, say, but not tone patterns.)
Do bilingual children have a delayed onset of the full-fledged characteristics of both their languages?
Again I would think it depends on just what kind of sign language you're teaching the child -- does the child grow up to be a fluent signer of natural sign language, etc.? Does it never go beyond pidginlike homesign?
Is your blogger profile email correct? I have even MORE crazy language acquisition questions than this thread can support :)
I have no trouble understanding the definition of a natural language, but I do have confusion about what a linguist calls "fluent". What does it mean?
yup it is!
but I'm only an undergrad student -- language acquisition just fascinates me. Sometimes I wish there had been an AP Linguistics or something, because I keep repeating material every class I go to (from HS to now), and I really want to get started on the "hardcore" mechanisms of language acquisition. (Which really, I haven't done.)
"but I do have confusion about what a linguist calls "fluent".
I'm not familiar with a quantitative definition of fluency either. Someone out there has probably measured more rigorous millisecond comparisons psycholinguistically (there are neat techniques to measure response times).
Qualitatively, we could state that to be technically fluent is to be natively fluent -- to be equivalent or near-equivalent to native-level speed and accuracy. And we notice this as a general fact among any speaker of any natural language -- speed and accuracy tend to be the same.
Slight diversion: there are interesting psycholinguistic studies of speaking rate, words per minute and so forth, but these can be misleading because the role of the "word" differs across languages: in polysynthetic languages, a single word might contain multiply-inflected morphemes with agreement between subject, verb, object and a multitude of complements, and arguably stand in for an entire sentence. So then you have a morphemes-per-minute ratio -- but then again you have "is this a valid comparison?" issues. Some interesting effect sizes are observed, but it's always telltale if the intralingual variance for the parameter you're measuring is significantly larger than the interlingual variance. For things like "information entropy" and the "morphemes per minute" ratio, this is the case we observe most often.
So anyway, back to universal ideas of fluency ... if you accept that all the native speakers of the world have the same level of accuracy (quantitatively linked to error rate) and speed (quantitatively linked to response time for perception, and morpheme output for production) ... then we could make a graduated scale of fluency (not necessarily linear).
Now pragmatically, we'll have cross-comparison issues to work out (like making a conjugation error in English may not be equivalent to making a conjugation error in Russian, since conjugation is more of a frill in English than it is in Russian), but if you accept that we could somehow normalise the data for error rate, morpheme production rate, response time, etc. then we now have a preliminary working definition.
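The normalisation idea described above -- convert each measure to a z-score against native-speaker norms, flip the sign where lower is better, and average -- can be made concrete. Everything in the sketch below (the measure names, the norms, the equal weighting) is an invented illustration, not an established fluency metric:

```python
# Hypothetical "fluency index": average of z-scores against
# native-speaker norms. Measures where lower is better (errors,
# response time) have their z-scores negated so that higher index
# values always mean more fluent. All numbers are invented.

def z(value, mean, sd):
    return (value - mean) / sd

def fluency_index(measures, norms):
    """Average z-scores; flip sign for measures where lower is better."""
    lower_is_better = {"error_rate", "response_time_ms"}
    zs = []
    for name, value in measures.items():
        mean, sd = norms[name]
        score = z(value, mean, sd)
        zs.append(-score if name in lower_is_better else score)
    return sum(zs) / len(zs)

# Invented native-speaker norms: (mean, SD) per measure.
norms = {
    "error_rate": (0.02, 0.01),
    "response_time_ms": (300, 50),
    "morphemes_per_min": (250, 40),
}

# A speaker exactly at native norms scores 0 on the index.
native = {"error_rate": 0.02, "response_time_ms": 300, "morphemes_per_min": 250}
print(fluency_index(native, norms))  # → 0.0
```

Whether the measures can actually be compared across languages is exactly the cross-comparison problem mentioned above; the sketch assumes that problem away.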
Could users of a formal symbolic language, existing only graphically on paper (or on the whiteboard, on pdfs, or on overhead projectors, etc.) approximately attain or surpass the "fluency" of natural languages? My immediate intuition tells me no -- not until we start installing neural implants or something (and as a transhumanist, I'd like to see that happen).
There's also the neurobio perspective to look at; swearing, for example, lights up the limbic system (e.g. the amygdala) more than it does linguistic centres. In fact, people with linguistic aphasias retain their ability to swear ... it's almost to the extent that swear words have emotional associations, not linguistic-semantic ones.
If you use a formal symbolic language, you probably won't be activating your linguistic centres too much either -- you'd be using your general frontal cortex (as a whole) more. The frontal cortex is good at a lot of things, but except where linguistic areas interface with it (on the borders) it's not really as specialised (and therefore generally not as efficient) at interpersonal communication.
I mean efficient communication in the sense of immediately being able to paint pictures and concepts in each other's minds -- not "efficient" in the sense of, "this guy took 100 pages to do his proof. I only took 40 lines! QED!"
And of course, there are always much more complex interactions going on than the traditional picture of a modular brain would indicate, and there's the whole neuroplasticity aspect (if you damaged your linguistic centre as a very young child, your neurosurgeon might find its new location is in a strange spot compared to most people). I find language processing fascinating because it typically has to integrate incoming speech streams (as well as produce outgoing ones) -- data we usually require Fourier transforms to analyse acoustically -- with semantic and cognitive signals, as well as visual information (typically graphemes for readers and writers; visual phonemes for signers), all in a fairly unconscious and "automatic" way.
Usually for native speakers the conversion of thoughts into language is so efficient that we have concepts of "thinking in English", "thinking in French", etc. But such ideas really should be modified to "thinking rapidly converted to English", "thinking rapidly converted to French". When this conversion mechanism is not fully developed or is defective, you get babies who want to communicate, "I want juice....now!! At this very moment!" but whose syntax machine can only handle two arguments. (Error: $var_3 not yet supported.) Or Broca's aphasics who want to say, "This is embarrassing! I know exactly what's going on! I've had a stroke and I can't speak!" (and they can still shout, swear and so forth) .... but can't utter anything but gibberish.