Look over these 3rd grade Texas silent reading comprehension tests and see what you think before reading the rest of the post.
Here are the 220 Dolch Sight Words and the 95 Dolch Sight Nouns (scroll to bottom for list).
Now, here are all the words in the first paragraph of the first story that are included in these 2 lists above:
They, had, cows, horses, and, pigs, a, dog, cat, bird, were, the, chickens, she, eggs, from, but, to, eggs, by, her [her is a sight word; the actual word is herself], ?henhouse [hen and house are sight words; you may or may not be able to figure that out]
Here are all the above sight words in the answer for question #1:
where, does, live, by, a, in, the, on, farm
Coincidentally (or not?), the only answer that consists entirely of sight words is the correct answer.
In the second story, all the connecting words are sight words, and the nouns that are sight words are horse, dog, and car. While this selection has a slightly lower percentage of sight words overall, a smart student can figure out the entire story from the picture.
In the third story, the noun sight words are kitty, dog, bird, house, box, ?parties [the sight word is party; you may or may not be able to figure this out], ?sunflower [sun and flower are sight words], song, seeds, and girl. There is also a picture that tells most of the story.
There are also elements of an IQ test in several of the questions.
This is why Geraldine Rodgers, in her book "The Case for the Prosecution, Charged with the Destruction of America's Schools," says,
“All those silent reading comprehension tests are a massive fraud. Back before 1911, when Binet of France originated the FIRST real intelligence tests, he used oral reading comprehension to test native intelligence, which is itself un-teachable. Binet’s reading comprehension paragraphs are STILL used to test intelligence. So reading comprehension scores are really IQ scores!”

You can read more about her book and the link between reading comprehension tests and IQ in this post.
On my webpage, I describe how poor, sight-word-taught readers guess their way through texts, include more quotes from Rodgers, and explain how this hurts vocabulary acquisition. I also link to a comparison of Romans 12 in the KJV with the vocabulary-dumbed-down NIV and show how each looks to a reader taught with sight words. Here is the first paragraph of each below (KJV first). Black words are in the first 2,000 most common words, so they could be read by most adults taught with whole word methods; red words are in the 2,000 - 5,000 range, readable for most but more difficult for someone with a poor visual memory; purple words are in the 5,000 - 10,000 range; and only the first and last letters are shown for words outside the most common 10,000--you would have to guess those words from the first and last letter and the context. (Those with poor visual memories would also have to guess varying portions of the red and purple words.)
I b_____h you therefore, b______n, by the mercies of God, that y_ present your bodies a living sacrifice, holy, a________e unto God, which is your reasonable service. And be not c_______d to this world: but be y_ transformed by the renewing of your mind, that y_ may prove what is that good, and a________e, and perfect, will of God.
Therefore, I urge you, brothers, in view of God's mercy, to offer your bodies as living sacrifices, holy and pleasing to God–this is your spiritual act of worship. Do not c_____m any longer to the pattern of this world, but be transformed by the renewing of your mind. Then you will be able to test and approve what God's will is–his good, pleasing and perfect will.
Now, the Reading First connection.
A spelling test or a phonics skills test is a better measure of reading ability up to grade 3 (and even after, but especially up to grade 3, when it is so easy to pass a reading comprehension test by guessing).
The Reading First Impact Study Interim Report, page 28, states,
"The RFIS had initially planned to use a battery of individually-administered tests to assess students across the specific components of reading instruction targeted by the legislation: phonemic awareness, phonics, fluency, vocabulary and comprehension (No Child Left Behind Act, 2001). When the study’s design shifted to a RDD, with a quadrupled number of schools and students in the study sample, the individualized student assessment data collection was no longer practical."

For various reasons described on pages 28 - 29, they selected the SAT 10 Reading Comprehension test.
Many of the students at the schools in the Reading First study are also, unfortunately, the type of students especially vulnerable to doing poorly on a test that includes IQ as an element of the test. These are also the students that benefit the most from explicit, systematic phonics.
Guess what grade levels were tested with these silent reading comprehension tests for the Reading First study?
Grades 1 - 3.
29 comments:
I'm an admirer of Geraldine Rodgers' work (and even more of her prescience and dedication) but there are some inaccuracies to bear in mind. One is the concept that "native intelligence" is some kind of fixed entity that cannot be altered. Another is that "reading comprehension" is a unitary construct.
Like height, "intelligence" has a genetic basis, but what the individual inherits is a potential for a range of development -- and that range is extremely malleable. Furthermore, "reading comprehension" is composed of identifiable, discrete component skills such as vocabulary, relevant background knowledge, fluent word decoding, cognitive strategies, etc. All of these are learned -- and can be directly taught.
Two books, now out of print ( but available in libraries or from used-book sites) which elaborate on these themes are Arthur Whimbey's Intelligence Can Be Taught and Zig Engelmann's Give Your Child A Superior Mind. They both contain much historical and research data, recapitulated for the layperson or interested reader.
It's true there's a strong correlation between IQ and academic performance, but that tells us nothing about the manipulability of the variable. I suggest that "potential" is something that can never reliably be inferred -- it is much too dependent on environmental effects. This goes for almost all inherited characteristics (see The Dependent Gene, by David S. Moore). Their phenotypical expression is dependent on interaction with the environment. Mozart's musical genius might never have developed had he been raised in a home where no one was at all interested in music.
Some factors that ought to prompt more educators to question the fixed nature of IQ include the often-noted tendency of kids who fail to learn to read to drop in IQ steadily from K to upper elementary, until they are no longer eligible for any "LD" designation based on a "discrepancy between achievement and ability." Conveniently, they get stupider every year, so that the system can finally say it has no obligation to provide special services because they are reaching their (limited) potential. Maybe, if the kids are getting dumber each year, the "IQ" genes must have carried expiry dates??
For another study that shows how variable IQ can be depending on classroom and instructional effects, see another Harvard Educational Review article (never sufficiently appreciated for its extensive data analysis and insights which were out of synch with the political ideology of the era):
Pedersen, E., Faucher, T. A., & Eaton, W. W. (1978). A new perspective on the effects of first-grade teachers on children's subsequent adult status. Harvard Educational Review, 48(1), 1-31.
Here's my take on the issue.
The measurement of IQ in children has a Schrödinger's cat problem.
By educating a child you are often teaching to an IQ test. If you expend great effort and teach a low-IQ child to read much better than the child's IQ would predict, a reading-achievement-based IQ test will likely indicate that the student has boosted his IQ. But give the child a highly g-loaded IQ test like Raven's, and the child's IQ will likely show no increase. The child's real IQ hasn't changed, and the child will still perform like a low-IQ student with respect to acquiring background knowledge, vocabulary, concepts, and other things not explicitly taught in school. The child can read better, will find it easier to acquire academic content with good instruction, and will likely perform better on achievement tests as he grows older. However, there will be a slow decline in IQ back to the child's baseline on achievement tests, like reading comprehension tests, that increasingly test the child's background knowledge and not his reading ability.
This low-IQ child with superior reading skills will still be less capable of learning from his environment than the higher-IQ children, and a special effort must be made to explicitly teach all the facts, vocabulary, and background knowledge the child needs.
This comes from my reading of the Follow Through results.
Reading, spelling, and math skills were increased significantly in the DI students. Their Raven's results did not improve significantly (nor did they increase for any other intervention). Once the DI intervention ended and the students returned to regular classrooms, their performance gradually eroded back to what their IQs (as measured by Raven's) would have predicted. Longitudinal studies showed that these students performed somewhat better than their IQs would have predicted, but the pace of the gains made with DI was not sustained. In other words, the students performed more like their "genetic" IQ than their "DI enhanced" IQ after the DI intervention. Thus, the DI curricula have been revised to provide more explicit instruction in vocabulary, language, and facts in the comprehensive whole-school reforms.
My 2 cents.
".. is a potential for a range of development -- and that range is extremely malleable."
I like this explanation.
It also depends on the level of expectations. A bell-shaped curve doesn't exist as the questions get easier.
I find that educators talk about all sorts of fancy things but never consider that schools barely expect more from kids than tying their shoes. OK. That's extreme, but I tell people to look at the questions on the state tests.
The assumption is that because the scores are so low, the tests are difficult. One educator called the NAEP test "the gold standard". I look at the test and wonder if kids are really that inherently stupid. No. Schools then look only for relative improvements, rather than at fundamental problems.
Someone said on another post that this is NOT rocket science. I agree. It isn't for most kids. I've harped on the idea (from programming) that you need to look carefully at the error and work backwards. Most schools use top-down guess and check to solve problems; the more complicated the explanations, the better.
Many of the kids at our school didn't do so well on the reading comprehension part of the state test. They have to read short passages and then answer questions. Well, it's perhaps because the school doesn't practice that skill at all. Duh! But our school talks about how kids have different learning modalities. No. The problem is that poor results happen because the school doesn't prepare the kids for what's on the test (which is defined and calibrated by teachers).
It doesn't matter whether they like the test or not. That's a separate issue. If they think that some other kind of learning makes up for doing poorly on those questions, then they need to come out and tell the parents.
This makes me think of something else. Has anyone ever heard of a school that assigns a class of kids to one teacher for all of K-6? Once the kids finish 6th grade, the teacher starts with a new batch of Kindergarteners. This is a horrible thought if your child gets stuck with the wrong teacher, but it would be an excellent way to judge the effectiveness of teachers.
Waldorf schools.
"In other words, the students performed more like their 'genetic' IQ than their 'DI enhanced' IQ after the DI intervention."
But what about comparing DI with constructivism, or high expectations versus low expectations? What are the long term effects? If IQ relates to potential, then what if all kids are being taught well below their potential?
Perhaps the problem is that some treat poor student results (on trivial material) as a reflection of genetic capacity or intelligence. With seriously low state test expectations, this can't be the case. I believe that using techniques like DI can provide fundamental, long-term improvements for kids no matter what their IQ.
As you improve education, the gap between those with high IQs and those with low IQs should remain the same, but the education level will be much higher for all. The goal is not to change the slope of the curve, but to raise it. To some, this might seem like raising native intelligence, but it could just be that the kids were already far below their potential.
>>But our school talks about how kids have different learning modalities.
There is a grain of truth here. I find the mismatch between learning style and teaching style in school to be significant when the child is visual and has significant auditory weaknesses (as happens with many, many children who had chronic ear infections at a young age), and he's stuck with an all-auditory teacher. Our biggest problem in math last year was having this situation. My kid learned that it's pointless to sit through a 40 min. class when the instructor looks and sounds like the blind men describing the elephant. It's even more frustrating to the child when he is made to feel like a fool because he can't understand the rambling teacher, yet he knows he can come home, read the reference text, get the concept in a flash, and explain it to his friends in homeroom. A classroom situation such as this is pointless. The scary thing is that every full inclusion class is like this, because the fully included can't read on grade level.
Steve, I agree.
Increasing educational outcomes is a worthy goal in and of itself, regardless of whether real/sustained IQ gains are being made or whether an achievement gap persists.
This is why DI is generally a better educational path than constructivism.
Reading achievement and IQ in children are not closely correlated. Naturally, reading well and early enhances IQ by providing more intellectual challenge, increasing background knowledge and thinking skills, etc.
Schooling effects (per se) are not determinative of IQ, either. The Follow Through studies showed that DI-taught children generally lost ground academically after they went into the (mainly lousy) low-SES schools their peers attended. No surprise there.
Twin studies are the best source of data re both the heritability AND the malleability of IQ. While most monozygotic twins raised separately perform at similar levels as adults (and often display creepy similarities that have no obvious explanation -- like marrying women with the same first name), in most cases the environments where they were raised are also similar to each other, so the genetic and environmental variables are confounded.
In cases where the environments were radically different, the twins' adult IQ's were also radically different, by more than 2 standard deviations in some cases (can dig up the references later). This is unusual, but the fact that it does occur demonstrates that IQ is not fixed. As with other inherited characteristics, it has a range of possible phenotypical expression, and that range is heavily dependent on environmental variables.
The Bereiter-Engelmann preschool project also showed that early IQ gains persisted into adult life (those students did not regress to their "genetic IQ"). The concept of a "genetic IQ" as a fixed entity is empirically weak. Genes are interactive with environmental variables and thus effects are modifiable.
The Harvard study also showed that increased-IQ effects persisted into adulthood.
"Genetic geniuses" could be raised in a severely deprived environment and thus fail to develop the cognitive strengths their genetic heritage would predict (remind me to post about the "Larry" case sometime).
What can safely be said is that genes may "take their course" if you do nothing (as they will with many other genetically-influenced conditions -- diabetes, say, or heart disease). We are only at the beginning stages of knowing how to teach to enhance cognitive function. While it is not feasible at this point to do it on a large scale (heck, we can't even teach reading effectively on a large scale), we do have the knowledge to enhance cognitive potential so that children will develop cognitive strengths at the upper end of their range rather than the lower end. This range can easily be much more than 1 standard deviation, and be the difference between a child becoming an illiterate HS dropout or a skilled tradesperson or engineer.
"There is a grain of truth here."
I agree, but my point is that they ignore the bigger problem. They don't like to directly teach the skills needed for the test, using whatever techniques will work.
The other issue is that teaching to some modalities requires more time and that all kids have to do the same thing. You can always slow down the pace to cover the material better. In our school, other modalities usually means art work, and that takes more time. It's visual, but so is reading the textbook.
My son doesn't need art to learn his science terms, but he is required to draw and color a 3 X 5 inch card for each. Students may have preferred ways of learning, but schools don't separate the kids into different learning groups. Everyone gets everything.
Our schools use full inclusion and it seems to me that the art work modality is a great way for them to set different expectations for different levels. It is more a tool of expectations than learning styles.
I agree that reading isn't well correlated with IQ, especially if you're measuring reading accurately with an oral reading test.
However, most reading comprehension tests have a few questions that someone with a low IQ who is a good reader will be more likely to miss. Look at question #10 on this test. I can imagine some of the less intelligent students I worked with (who eventually did learn to read well with phonics!) would have difficulties with this question, even though they would have been able to read it well.
Literacy level is more highly correlated with earnings than IQ:
http://www.thephonicspage.org/On%20Phonics/profitable.html
Let me clarify. Reading comprehension tests are mostly tests of decoding ability in grades K-3 and increasingly become tests of comprehension ability thereafter. Comprehension ability depends somewhat heavily on vocabulary and background knowledge, which are more highly g-loaded skills. Nonetheless, overall reading comprehension is not a highly g-loaded skill, unlike, say, determining analogies.
There is a moderate correlation, about 0.4 to 0.5, between IQ and reading comprehension. So IQ might account for about 20% of the variation in reading comprehension scores. But this gives higher IQ students a significant advantage on (low g-loaded) reading comprehension tests, other environmental variables being held constant.
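To spell out the arithmetic behind that 20% figure (the share of variance explained is the square of the correlation):

$$ r \approx 0.4 \text{ to } 0.5 \quad\Rightarrow\quad r^2 \approx 0.16 \text{ to } 0.25, $$

i.e., roughly a fifth of the variation in scores.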
Because of this, lower IQ kids, even the ones taught to read well, are at a disadvantage on reading comprehension tests unless a very concerted effort has been made to explicitly teach the student vocabulary and background knowledge which is not typically done in schools.
I believe this is why comprehension scores steadily decline even in good programs like DI.
The adoption studies showed that IQ was malleable in younger children, but any IQ increases generally washed out by age 17. See for example the Minnesota Transracial Adoption Study.
And the malleability of IQ greatly depends on the g-loadedness of the IQ test. So, as I pointed out above, students receiving the very effective DI intervention for four years in Follow Through showed significantly higher gains on achievement tests, which could be interpreted as IQ gains, but did not perform much differently on the highly g-loaded Raven's Progressive Matrices test, indicating that their IQs had not really been raised as much as the achievement tests would lead us to believe. And we would expect to see lower performance outside of the intervention, which is what we saw. These kids still need superior teaching (less ambiguous presentations) and more repetitions (for retention) to continue the pace of their education and to see superior performance on achievement tests, at least the ones that aren't highly g-loaded. That is a worthy educational goal.
Environmental factors can have an effect on IQ, but it appears that the effects are mostly negative, such as the negative effects caused by severe malnutrition, lead poisoning, and growing up in very low language environments.
"Reading comprehension tests are mostly tests of decoding ability in grades K-3 and increasingly become tests of comprehension ability thereafter."
Actually, as this one shows, depending on the words chosen and how many of them are sight words, they can be passed by someone who can't sound out a single word yet can read those 220 Dolch sight words and 95 Dolch nouns. Most of the students I've tutored would have been able to pass this test but can't decode at all. They can read their leveled readers or their vocabulary controlled school books, but they can't read a book written without vocabulary control, even if it is below the grade level they would test at using a reading comprehension test.
A much better way to test true reading ability is an oral reading test.
If you need a quick standardized test that is not oral, a spelling test is probably better correlated with true reading ability (not just the ability to read several hundred sight words and guess at the rest of the passage from the first and last sound of the word) than a silent reading test.
What I should have said was a properly designed reading comprehension test.
I think the problem with the oral reading test is that it relies on teacher subjectivity, and the reason for these standardized tests in the first place is to provide an independent check on the school/teacher.
I even think the spelling test is a bit problematic as a reading test. Encoding ability might correlate well with decoding ability, but decoding is easier, so a student might be able to decode the words (i.e., read them) without being able to encode them.
I've asked this question before and there seems not to be an adequate standardized test that accurately measures reading ability.
I've asked this question before and there seems not to be an adequate standardized test that accurately measures reading ability.
Yes, you have asked that question. But you didn't make it a very specific one. Were you asking about group-administered tests, criterion-referenced tests, individual assessments, norm-referenced assessments, holistic/performance assessments, measures that assess both comprehension and component skills, diagnostic measures, primary grade reading assessments, upper elementary assessments, or what?
Since your query was so vague - rather like asking "what's good to eat?" -- I didn't try to answer it. I guess no one else did either. However, that does not mean that there are no valid, accurate or useful measures of reading achievement.
Geraldine Rodgers wrote:
"So reading comprehension scores are really IQ scores!”
Does anyone have any examples of this issue related to math? Does this mean that doing well with a constructivist or discovery approach is more of a reflection of IQ than it is of knowledge and skills? When teachers try to get kids to discover things, are they really trying to teach something that can't be taught? If having the light bulb go on improves understanding, then this happens mostly for the higher IQ kids.
Generally, only a few students in a group discover anything. So the other students don't get the benefit of the discovery process and can't learn how to do it better. That's why I've always thought that any attempt at discovery should be done with homework (if at all).
Teachers try to lead kids down the discovery path with leading questions. A few might catch on quickly and make the connection or discovery, and that will make the teacher feel good, but what about the rest of the students who are going "Huh?".
I'm not big on IQ because it tries to reduce intelligence to one number. (Objective functions are crude predictors at best.) How can you then look at the questions on an IQ test and claim that they are not appropriate for a regular test, except in a very general sense?
I'm not disagreeing completely, because I see math curricula that seem to be based on trying to improve things like critical thinking. Can you really do that? How much of critical thinking is IQ based, and how much is knowledge and skill based? Since many math curricula aren't big on knowledge and skills, how do they expect to develop critical thinking?
What IS critical thinking? Is it the ability to apply previously learned knowledge and skills? Is it the ability to apply them in new or unique ways? Is it some magical independent skill? Well, if you don't have the basics, you're completely out of luck, even with a high IQ.
If modern education is about process and not mere facts and rote skills, then what do they have left? IQ, which can't be taught?
My impression is that our schools are nowhere near this level of thinking. They just do stuff. They talk about ideas that sound good, but that's it. Their big problem is how to do anything in K-6 classes which have a huge ability spread. They're still trying to get some kids to master adds and subtracts to 20 in third grade. Any talk of critical thinking and understanding is superficial at best. It's all about process. Their process.
The Gray Oral Reading Test (GORT) is a fairly good measure of actual reading ability.
There are other good tests out there, I'm sure, I haven't done much research in this area.
An ideal test would also use nonsense words to figure out true decoding ability, a list of words to see how students read individual words, and a reading passage to see how they read stories.
I agree that a spelling test would miss a few students that are good decoders but not great spellers, but I believe it would be a better measure than a silent reading test if what you're testing is actual reading.
I was being vague because the NCLB statute is vague. :) NCLB is concerned with "proficiency in ... reading or language arts." That is the extent of the guidance given to us.
We know that reading is a complex skill comprising various subskills and content knowledge. What does it mean to be a proficient reader? What standardized test or battery of tests exist that accurately measure the "reading" ability of children and whether they are proficient?
Further, under NCLB it is the educators whose performance is being measured, even though the students are the ones taking the test. So the testing instrument must not allow subjectivity and must not be capable of being gamed by the educator. For example, Elizabeth's example is a test that can be gamed by an educator, since students can be taught to memorize the words appearing on the test and, thus, the test is not a true reflection of reading ability.
So, pretend you are a new superintendent of a school district who wants to determine the reading ability of the children attending the schools in your district and how well they are being taught. For example, you want to know that your third graders are reading at a third grade level and will be capable of reading at a fourth grade level next year. You get to pick the standardized test(s) to be used. You will have non-reading-specialists monitoring the administration of the test(s). The monitors can identify outright cheating by teachers and/or students but nothing more subtle than that, i.e., they are incapable of making substantive determinations related to reading of any kind. Otherwise, the administration of the tests is out of your control. Only the results of the test(s) will be reported to you.
What assessments do you select and why?
(I'm going to make this a new post -- so don't answer here answer in the comments of the new post.)
Steve, see this post.
The short answer is that the quality/clarity of the teacher presentation determines the amount of brain power the student needs to learn what is expected from the presentation. The rest is practice.
I've harped on the idea (from programming) that you need to look carefully at the error and work backwards. Most schools use top-down guess and check to solve problems...
I know this thread is mostly about reading but I have to interject a great math example of working backwards to spot a problem.
I had a girl in my math master's group who was missing a lot of problems in the "fact drill" portion. One day after practice I said, let's look at all these ones you got wrong and see if we can find a problem. We noticed they were all subtraction, and oddly enough, she was always short by 1. I asked her how she did these problems (most kids would just "know" them because subtraction facts SHOULD BE automatically recalled...but we know a lot of kids haven't been taught that way...). She said she always uses the technique of "counting up": she starts at the lower number and counts up to the higher one. Problem is, she never counted her destination number, so she was always short by one. Voila! I wish I could say this story had a happy ending but so far, no. Although she knows what she is doing wrong, it's such a habit now that she has not been able to break it.
She's in fifth grade. And I'm just a parent volunteer, not a teacher. How come no one noticed this before?
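To make the off-by-one concrete, here is a rough sketch of the two counting procedures in Python (the function names and the 12 - 8 example are mine, just for illustration):

def count_up_buggy(smaller, larger):
    # The student's habit: count each number passed on the way up,
    # but never count the destination number itself.
    count = 0
    current = smaller
    while current + 1 < larger:   # stops one number before the destination
        current += 1
        count += 1
    return count                  # always one short

def count_up_correct(smaller, larger):
    # Count every jump, including the final jump onto the destination.
    count = 0
    current = smaller
    while current < larger:
        current += 1
        count += 1
    return count

print(count_up_buggy(8, 12), count_up_correct(8, 12))   # prints: 3 4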
"Problem is, she never counted her destination number, so she was always short by one."
She shouldn't be counting numbers. She should be counting jumps. If she is subtracting 8 from 12, then she should think to herself: "Jump from 8 to 9 is one, jump to 10 is two, jump to 11 is three, and jump to 12 is four." I don't know when I started thinking that way, but I wasn't taught it. I sometimes think of the jumps as chunks or blocks between the integers. Maybe she should think of marbles. Put 8 marbles in one hand and 4 in the other.
I know that counting up is a fast way to calculate differences. If you have to subtract 85 from 169, then this becomes 15 + 69 = 84. I know my brain finds it easier to see the difference between 85 and 100 and then add the amount above 100. When I add the numbers in my head, I do it left to right. If there is a carry, I go back and make the correction.
Yup, I don't disagree that counting up (or counting jumps) is a very effective mental math strategy for problems with larger numbers like 169-85. It's also a good way to think about subtraction b/c it involves a number line and I really like number lines for conceptualization.
This is one of the few things I liked about Everyday Math--mental math strategies like this were one of the few things that seemed to have been explicitly taught.
But the fact drill in the math competition has 75 simple questions like 13-7, or 5*4-8. Here, my concern for a 5th grader is two-fold: 1) you can't be fast and accurate if you don't just know these facts at the automatic level (this shows up pretty quickly in a timed fact drill) and 2) she had been doing it wrong for so long that she was having a really hard time correcting it.
Another gripe I had in coaching this is the heavy emphasis on order of operations. What's the point of trying to solve a string like this:
4-3*12/4-7+2*3
While it's important to know OOP, we should be teaching kids from the outset to use parentheses for clarity and not rely on OOP.
4-(3*12/4)-7+(2*3) can be understood and evaluated so much more easily.
Anyone else have a thought on that? It's just a little pet peeve of mine.
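For what it's worth, a quick check in Python (which follows the standard precedence convention) confirms that the bare and parenthesized forms give the same value; the point is only that the second one is far easier to read:

bare = 4 - 3 * 12 / 4 - 7 + 2 * 3          # relies on the precedence convention
explicit = 4 - (3 * 12 / 4) - 7 + (2 * 3)  # parentheses make the grouping visible
print(bare, explicit)                      # prints: -6.0 -6.0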
I'm old enough, or perhaps lucky enough to have never been taught order of operations. My teachers would have considered reliance on 'order' to be inappropriate and therefore wrong.
Unfortunately it has become a state standard so I have to teach it to kids that can't add yet.
4-(3*12/4)-7+(2*3) can be understood and evaluated so much more easily.
Anyone else have a thought on that? It's just a little pet peeve of mine.
I was taught to use parentheses; order of operations was a side topic. I've never paid it much mind, until my daughter had to learn it.
I agree with Barry. As a programmer I never rely on order of operations but always use parentheses. It's too hard to read and too easy to make a mistake. Order of operations is basically a convention.
Understanding that order doesn't matter as far as the order of addition and subtraction or the order of multiplication and division is important, however.
And, of course, if this is used on standardized tests then you are stuck.
Some knowledge of OoP can't hurt.
What happens when you have (2 + 3)^2?
It helps to know PEMDAS (do parens before exponents). I can imagine a kid getting 13.
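If it helps to see where a 13 could come from, here is a quick check in Python; my guess at the mistake is squaring each term separately instead of adding first:

print((2 + 3) ** 2)       # 25: do the parentheses first, then square
print(2 ** 2 + 3 ** 2)    # 13: squaring each term separately (the likely error)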
What happens when you have (2 + 3)^2?
You rely on parentheses as Susan J said. I was taught, when in doubt, use parens. Similarly, the parens take precedence: do what's inside the parens first. So I have nothing against teaching students that, but the other orders are conventions as Susan says.
4-3*12/4-7+2*3
I never really studied OOP until I started to program.
The problem is that the above equation is typed in on one line, like in a program. It wouldn't be so bad if you wrote it like this:
4 - 3*12/4 - 7 + 2*3
If it were written in normal math form and not just to fit on a single line of text, it would be even clearer.
I must have learned about order of operations early on, but I don't recall that it was ever a problem for anyone. It is a problem, however, when you type in equations with characters.
In my programs, I use parentheses often to make sure that everything is quite clear. When I write equations on a piece of paper, I don't feel a need to do the same thing.
Math curricula should make sure that kids learn math with paper and pencil. My brain reacts quite differently to traditional math notation than it does with a linear or computer text equation. I can't "see" the equation when it's on a line of text.
I do math on paper, not on the computer. Computer text comes only after I'm done with the math.
The only difference between
4-3*12/4-7+2*3
and
4 - 3*12/4 - 7 + 2*3
is the spacing, and certainly writing it that way does help clarify it.
But the point of the order of operations exercises is that you don't have any visual cues other than the numbers, symbols and their orders. Parens are used in the questions only if they change the answer.
E.g., you might see 3 x (4-2) but you won't see (3 x 4) - 2 because it is the "same" as 3 x 4 - 2.
And, the symbols used are the "x" not the star, and the linear division sign not the slash "/". I always use stars or dots b/c they are clearer than the "x"'s for multiplication, but the schools don't do that. I don't like "x" for linear operations b/c it confuses the kids when it comes to algebra and x is the first variable they are introduced to. Also, the visual cues for order of operations become very clear when you drop any unnecessary operator notation, don't they, as in 3(4-2).
As Paul noted, he ends up teaching kids order of operations when they can't even add yet. And let me tell you, order of operations isn't that easy for the kids to think through and apply! Why rely on the convention? If our curricula emphasized using parens for clarity (not just when necessary) I think this would help a lot of kids.
Well, maybe I will end up teaching middle school math in my next life, too. One never knows.
The real issue is that there's nothing INHERENT about order of operations--it's a CONVENTION. I'm not even sure grammar school math teachers understand that. The symbols are just symbols. We are taught to give them meaning, and that's great, but to go out of one's way to make people learn a convention JUST to learn the convention is cruel.
That's why it's so mind-numbingly awful that they test young children on it. Learning it isn't teaching them any math; it's just making it seem like "math has weird rules and is tricksy on purpose, and no one can be bothered to be clear, so you have to learn this meaninglessness."
Later, when abstracting matters, and it becomes clear that you COULD have chosen any convention you wanted, but we chose THIS one for clarity, then it makes sense to teach it.