
Monday, December 29, 2008

Liars Figure and Figures Lie (sometimes)

If you want to get a look at the systemic performance of a school system's instruction, it's useful to compare grade-level performance on standardized tests. My district uses testing from the Northwest Evaluation Association (NWEA). Three times each year our kids take the Measures of Academic Progress (MAP) tests. This is a computer-driven adaptive test.


As kids answer questions (English and Math are tested separately, about 80 minutes each), the tests adapt to their answers, seeking to produce appropriate questions for wherever they are academically. As a result we get RIT scores that are independent of grade level, i.e. if a third grader and an eighth grader get the same score, they've answered the same stuff correctly. You can read all about the scoring and nature of the tests on their site.


*** WARNING: A bit of math is next. If you are math-averse, skip this section. ***


Academic tests, the length of people's arms, the height of trees, and tons of stuff in nature conform to a normal probability distribution, more commonly known as the bell-shaped curve. The curve is symmetrical about the mean of whatever you are measuring, and it has the property that the area under the curve, over some range, represents the percentage of people in that range. So for example, the area under the entire curve is 1, or 100%. The area from the minimum to the mean is 0.5, or 50%.
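
If you want to check these claims yourself, here's a minimal sketch in Python using scipy.stats.norm; the mean and standard deviation are made-up numbers for illustration, not real RIT statistics.

```python
from scipy.stats import norm

# Illustrative numbers only; not actual RIT statistics.
mu, sigma = 200.0, 10.0

# Area under the entire curve: 1, i.e. 100% of the population.
print(norm.cdf(float("inf"), loc=mu, scale=sigma))      # 1.0

# Area from the minimum up to the mean: 0.5, i.e. 50%.
print(norm.cdf(mu, loc=mu, scale=sigma))                # 0.5

# Area within two standard deviations of the mean: about 95%.
print(norm.cdf(mu + 2 * sigma, loc=mu, scale=sigma)
      - norm.cdf(mu - 2 * sigma, loc=mu, scale=sigma))  # ~0.954
```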


The shape of the bell can be narrow or broad, as determined by the standard deviation of the data. Without going into the math, just think of the standard deviation as the amount that the data varies from the mean. If you grow a certain variety of tomato in your backyard you might expect your tomatoes to have a wider range of diameters than the same variety grown in a greenhouse with controlled lighting and feeding. A perfect crop might have a distribution that looks more like a knife blade than a bell, and a really bad crop grown next to your compost pile might look more like a mushroom than a bell.
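
Here's the same idea as a quick Python sketch (again with made-up numbers): two bells with the same mean but different standard deviations, a "greenhouse" crop and a "backyard" crop.

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(150, 250, 500)

# Same mean, different spreads (hypothetical values).
greenhouse = norm.pdf(x, loc=200, scale=5)     # small SD: tall, knife-like bell
backyard = norm.pdf(x, loc=200, scale=15)      # large SD: low, plump bell

# The peak drops as the spread grows, because the total area must stay at 1.
print(greenhouse.max(), backyard.max())        # roughly 0.080 vs 0.027
```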



*************** End of Math Warning *****************


For kids taking consecutive math courses in a perfect system, you'd expect to get bells that all have roughly the same shape, marching across the page with equal spacing between them. NWEA expects about 8 points of shift in the mean for each year. Below is a graph from schools in my district showing math scores for grades 3-8. Each grade is represented as a distinct curve.
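
For readers who want to see what that ideal picture would look like, here's a hedged Python sketch: six bells with identical spread, each mean 8 RIT points above the last. The numbers are hypothetical, not my district's data.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# Hypothetical "perfect system": equal spreads, means 8 RIT points apart.
base_mean, growth, sigma = 190.0, 8.0, 12.0

x = np.linspace(150, 280, 600)
for grade in range(3, 9):
    mean = base_mean + growth * (grade - 3)
    plt.plot(x, norm.pdf(x, loc=mean, scale=sigma), label=f"Grade {grade}")

plt.xlabel("RIT score")
plt.ylabel("Density")
plt.legend()
plt.show()
```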



Notice that the means (at the peaks of the curves) are marching across the page as you'd expect, but there are two concerns. One is that the means are not moving by 8 points, which is the national average. The other is that the standard deviation is getting bigger, making each successive year exhibit a fatter bell. The right sides are moving; the left sides are relatively anchored. The peaks are dropping to make up for the plumper curves.
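
As a rough back-of-the-envelope check (hypothetical numbers again): if the left edge of each bell, taken as the mean minus two standard deviations, stays roughly anchored while the mean creeps up by about 6 points a year, the standard deviation has to grow by about 3 points a year to hold that edge in place.

```python
# Back-of-the-envelope sketch with hypothetical starting values, not district data:
# if the left edge (mean - 2*sd) stays anchored while the mean rises ~6 points per
# grade, the SD must grow ~3 points per grade to keep that edge fixed.
mean, sd = 190.0, 10.0

for grade in range(3, 9):
    left_edge = mean - 2 * sd      # stays at 170 throughout this example
    print(f"grade {grade}: mean={mean:.0f}, sd={sd:.1f}, left edge={left_edge:.0f}")
    mean += 6.0                    # a shift below the 8-point national norm
    sd += 3.0                      # growth needed to hold the left edge in place
```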


You can do some math on these curves to show that fully 33% of the eighth grade is performing at a level that 95% of the third grade reached. 29% of the 7th grade, 51% of the 6th grade, 83% of the 5th grade, and 91% of the 4th grade are mixing it up in their respective curricula with the same skills that 95% of the 3rd grade has.
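
For the curious, here's roughly how a figure like that falls out of two normal curves. The means and standard deviations below are hypothetical, chosen only so the arithmetic lands near the numbers quoted above; "the level that 95% of the third grade reached" is read here as the third-grade mean plus two standard deviations (the 207 cut point discussed in the comments).

```python
from scipy.stats import norm

# Hypothetical parameters; not the actual district data.
mu3, sigma3 = 187.0, 10.0     # third-grade mean and SD
mu8, sigma8 = 214.0, 16.0     # eighth-grade mean and SD (a fatter bell)

# Cut point two standard deviations above the third-grade mean.
cut = mu3 + 2 * sigma3        # 207 with these numbers

# Share of the eighth-grade bell sitting at or below that cut point.
share = norm.cdf(cut, loc=mu8, scale=sigma8)
print(f"cut point = {cut:.0f}, share of 8th grade below it = {share:.0%}")   # ~33%
```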


We have the following systems driving this data: no retention policy (we even have retention tryouts), no remediation policy, group work, spiral curricula, constructivist discovery 'learning,' and an enormous (>30%) transient population. The data shows how it's working. These are system-wide policies independent of school, text, or teacher quality.

In the normal distribution the mean value (at the peak) is the median. This happens because it's a symmetrical distribution.


For a teacher, this means your targeted lesson is aimed at the median child: 50% of your kids are above it and 50% are below it. Theoretically you aren't targeting anybody in the room. Of course this is where theory and practice diverge, and of course you are bound to 'hit' someone with your lesson. But the reality is that with all those 3rd graders in the room, your attention is mostly focused on the neediest.


The panacea (silver bullet) for this is differentiation. If you really want to fix it, though, you have to start demanding mastery while at the same time doing away with arbitrary (non-academic) grade-level groupings.


If you can replace grade levels with placement based on demonstrated mastery, differentiation goes the way of the dodo bird and all those fat bells go on a diet. You'll likely need fewer teachers while producing better results. It's not class size that matters, it's standard deviation.

18 comments:

  1. I'm not having any luck with Blogger yet. I usually compose in OneNote, but Blogger seems to have a grudge against OneNote.

    ReplyDelete
  2. Your pictures undercut your argument.

    Your bell curves are showing that, whatever the source, over time, some kids are learning more than others because some kids are learning more quickly than the others are.

    Grade level groupings may be arbitrary, but "placement based on mastery" doesn't solve the problem that some are learning at lower rates than others are. So even if today, you placed together all of those who'd mastered multiplication, a month from now, only some of them will have mastered long division.

    Regrouping based on mastery, be it every year, or every month, or every day does not change the rate at which these kids are gaining mastery.

    There's no data to show that anything you do will ever make the slow kids into fast kids. (If there's data showing that you can, ceteris paribus, increase a kid's rate on the margin, I'd love to see it.)

    So then, in order to make sure that the slower kids progress, what you need to do is give more time to the slower kids, because distance = rate X time. They need more time to go the same distance.

    "Placement" based on rate AND current starting point would work if you then increased the time spent working for those kids.

    But nothing will ever stop the fast ones from pulling away given the same amount of time.

    The goal can't be "placement based on mastery" only. It must include recognition of the learning rates, and must include a mechanism for increasing the time spent working for those with lower rates so that they reach a minimum amount of forward progress per year.

    ReplyDelete
  3. Your graph is interesting to me in other ways not directly related to your point. It appears that grades 6, 7 and 8 are moving awfully slowly forward, but are substantially separated from 3, 4, and 5, which are also moving awfully slowly forward. That is, it looks like something's been different about 6, 7, and 8's experience than about 3, 4, and 5's--that gap between 5 and 6 is huge.

    Since I'm guessing all of the social factors you brought up are basically the same across grades (transient population, SES, etc.), what happens/happened in the last three years to the 3-5 curriculum, or to the 6-8 one? Any ideas?

    ReplyDelete
  5. The goal can't be "placement based on mastery" only. It must include recognition of the learning rates, and must include a mechanism for increasing the time spent working for those with lower rates so that they reach a minimum amount of forward progress per year.

    Allison, your argument based on learning rates makes a lot of sense to me. I suppose it is one of the reasons why rate-based "tracks" are effective. In an accelerated track, for example, kids having about the same level of mastery and fast learning rates cover more ground per unit time. It becomes a "track" because the academic distance they traverse over a given amount of time is so large that it's difficult to move in or out of the track.

    One thing I question, though, is whether kids with slower learning rates really do need more time if taught in appropriate settings. Perhaps the slow learning rate is in part a function of heterogeneous grouping, which causes teaching to be over their heads. If they were grouped with others at the same mastery level and with the same (slower) learning rate, I bet that focused instruction would cause their learning rate to increase. Could be that "extra" time would not be needed in order to reach minimum levels.

    ReplyDelete
  6. I too noticed a big gap between grades 5 and 6 in the data. Does this imply something especially effective in 5th or 6th grade?

    As for the slowdown in grades 7 and 8, maybe the middle school model is to blame. Cf. Cheri Pierson Yecke, The War Against Excellence: The Rising Tide of Mediocrity in America's Middle Schools.

    ReplyDelete
  7. You can do some math on these curves to show that fully 33% of the eighth grade is performing at a level that 95% of the third grade reached. 29% of the 7th grade, 51% of the 6th grade, 83% of the 5th grade, and 91% of the 4th grade are mixing it up in their respective curricula with the same skills that 95% of the 3rd grade has.

    This paragraph looks backwards to me. If the 33rd percentile of 8th grade is at the same point as the 95th percentile of 3rd grade, that means that the remaining 5% of the 3rd graders are doing better than that 33% of 8th graders. The point at which "95% of 3rd graders" performed is the 5th percentile at the left end of the bell, not the 95th percentile at the right end of the bell. Ditto for the description of performances by the other grades.

    Still, point taken. The graph does flatten out over the years, indicating that the right end of the bell moves faster than the left. The troublesome part is that the left end doesn't seem to move at all, indicating that the lowest performing eighth graders are performing at the same level as the lowest performing third-, fourth-, etc. graders.

    ReplyDelete
  8. The elementary to middle school gap has two independent effects. One is that in our district we have Investigations for K-5 and CMP in 6-8. You might go wild on that, presuming that it portrays something about those two products. I think it's more to do with the nature of the test scoring. NWEA is given nationally to millions and millions of students.

    This means their adaptive test methodology and subsequent scoring reflect, to some degree, a national curriculum bias. That is, if my state's 5th grade curriculum is somehow radically different from the national mean, then my scores could show that as a discontinuous jump. Maybe all of a sudden we 'fit' the norm better in 6th, so we jump up.

    I'm not too sure about this because NWEA doesn't publish national data (to my knowledge). The only 'national' data I have is that the norm for the shift in mean is about 8 points each year. Our kids exhibit about 6 points if you average out the shifts.

    Also, with respect to learning rates: I disagree with the premise that demands for mastery don't affect rates. If you plunked them into a traditional setting, that's likely a decent presumption. But for me, mastery MUST be coupled with remediation, i.e. if you failed to master something at the appropriate milestone then you automatically get some sort of remediation (more than a do-over), and that will, by definition, be an adaptation to the child's (apparently) slower learning rate.

    You can't have one without the other! Kids absolutely learn at different rates if you keep the input constant.

    From Marzano... "Mastery of standards is the constant and time is the variable."

    Also be careful about the numbers quoted. 95% of children in the third grade is not the 95th percentile of the third grade. The 95% of children thingy is plus or minus 2 standard deviations (about the mean) in their distribution.

    It is a given in a normal distribution that this area encompasses 95% of a population. It's an area for a range. To read these curves as percentiles you have to consider a range which (theoretically) starts at negative infinity and proceeds up to the desired percentile value.

    To do the math I had to define what 3rd grade meant (since it's a range). 95% of the third grade includes kids up to plus 2 standard deviations. 2.5% of them are off the charts on the low side and 2.5% of them are above that cohort. This point turns out to be about 207 as a cut point. If you are in any other grade and below 207 you would be considered (by my definition) to be in the third grade.

    You're correct that this is arbitrary and of course some 3rd graders are way below the median for their grade. I wasn't clear enough about that, and I didn't mean to imply that all third graders are stuffed up against that magic 207 value. I just needed some sort of definition to advance the discussion.

    I also think that DI experience refutes, to a degree, the arguments about rates. I seem to remember that their research shows that demanding mastery actually narrows learning rates. This would seem to imply, at least to my puny brain, that rates are derivative of the lack of mastery in some fashion.

    This makes a lot of sense to me because if you aren't being taught in your ZPD (you're missing stuff that you need), then your short term memory is burning lots of carbs looking for things it's not going to find and this slows you down on the new stuff.

    By the way, DI is a system that demands mastery before proceeding. What's often missed in the mastery discussion is the 'when'. I believe in DI the 'when' is every day, every lesson. In public ed, if it's demanded at all, it's annually.

    Maybe a DI expert could weigh in on this.

    ReplyDelete
    >There's no data to show that anything you do will ever make the slow kids into fast kids. (If there's data showing that you can ceteris paribus, increase a kid's rate, on the margin, I'd love to see it.)

    The point about rate is valid, but only if you assume no intervention (a safe assumption in most school learning situations). There is data that you can improve *individual* children's rate of learning in specific areas. The general rule of thumb is that the rate of improvement will be about 2x: i.e., if a certain concept or discrimination required 100 trials to master, the next similar learning task will take 50 trials, then 25, etc., up to an individual's ceiling. This is nicknamed the "learning curve," although on a semilog graph it will be a straight line. Individuals do have ceilings -- in this as in most things. Learning rate, like IQ, is a *range* rather than a point, and something that is modifiable by environmental variables.

    Several early studies (1960's-1970's) were with severely handicapped children who needed THOUSANDS of repetitions to learn something. Their learning rate improved, under experimental conditions, to a much more normal (but still slow) rate. Thus, you will never turn a tortoise into a cheetah, but the tortoise might end up similar to an average dachshund. I have the cites on some of those articles but they are very recherché items in Sp. Ed. journals and not easy to look up. I'll see whether Catherine can run them down for me. Engelmann was a partner in some of them.

    With more "normal" learners, there is lots of data in the PT literature about improvement in learning rate. The Great Falls project (a school implementation of rate-building instruction in Great Falls, Montana in the 70's or 80's) collected quite a lot of data on this. The students' abilities generalized to many areas of the curriculum and the project was finally pulled, in part because too many kids ended up being classified as "gifted." One of the project managers was Ray Beck, who subsequently (and perhaps still) went to work for Sopris West. You could consult him for data on the project -- I know it has been published, but I don't know where.

    PT practitioners and their clinics also have a great deal of data on individuals and their rate of learning improvements. You could consult Elizabeth Haughton, the Center for Advanced Learning (CAL) in Las Vegas (can't remember the name of the person in charge), Dr. Kent Johnson at Morningside, Dr. Joe Layng who implemented many rate-building procedures both with stroke victims and with low-SES college students in his years in Chicago, Dr. Carl Binder and others for specifics of publications, data, etc. There's plenty out there, but you have to try hard to find it since it is in opposition to mainstream thinking.

    Since my concern is with teaching students, not with convincing naysayers (I ignore them), I have made use of the lessons from these people to successfully get "hopeless" cases -- not only "dumb" kids but kids who learn at a slow rate -- functioning at higher levels so that they can compete in the mainstream successfully. Very few end up at the top of the heap (only one so far), but numerous others end up in the average range and that's what excites me, since they were initially regarded as too dyslexic, too stupid, too slow, too disadvantaged, too whatever, to succeed. At least once or twice a year I hear from one who has finished university, started a business, is successful in some field of study or application that I would never have thought of when he/she was a student (designing solar-powered homes in one case, marine biology in another, co-ordinating seniors' services for a community agency in a third). The point is that these students can learn effectively (with DI, among other things) and their rate of learning can be maximized so that they are empowered to achieve meaningful goals.

    In a perfect DI environment, you do periodically regroup students based on rate, but rate-building per se is not something DI people typically focus on, and its largest application has been with people in Special Ed populations (autism, developmental disabilities, LD). For people currently doing cutting-edge work in the field, google Michael Fabrizio, Alison Moors, Richard McManus, Kent Johnson (I can probably think of others).

    It's hard work, but it can be done. Will it be done routinely in the school system? Not in our lifetime I suspect.

    ReplyDelete
  10. Vicki and Allison --

    There is some intractable difference in learning rates between "slow" and "fast" learners. But that difference is greatly magnified by placement in class/curriculum for which the learner is not prepared because they missed, for whatever reason, the learning on which the current instruction is supposed to stand. Think how much slower we might learn fourth year French if we had not taken third year French; compare that to how quickly we would learn if we had been correctly placed in third year French. The same dynamic applies to learning at the K-3 level, and for all subjects.

    ReplyDelete
  11. The kids who do well in my son's school aren't just good students, they are well supported by their parents, and I'm not talking about just making sure that homework gets done. When the honor roll came out, I saw a correlation with the parents. I'm a big advocate of looking at individual cases to see exactly what is going on. Unfortunately, many disregard this as useless anecdotal evidence only.

    I don't debug my programs using statistics. I find a clear error and work backwards to fix the problem. I know that there may be many other errors in the program, but at least one is fixed. I completely understand the error and fix it.

    Statistics assumes that there is some commonality between individual cases or anecdotes. Conversely, by studying individual cases, you can assume that the solution would apply to many more individuals. In large programs, with potentially many errors, you can't possibly solve the problems top-down (guess and check). The errors can interact in many devious ways. You have to pick apart each error from the bottom up until they all have been solved, or at least, understood.

    Statistics also cause people to look in the wrong direction. Perhaps you can pick up relative changes, but you can completely miss huge absolute problems. I've been in meetings where parents and teachers study state test results very carefully. They never looked at the actual questions on the test and wondered, "What the hell is going on?"

    ReplyDelete
  12. Anonymous...

    Exactly right. Today, time is the constant and mastery is the variable.

    The purpose is to get all the fuel into the plane, not to make it fly. Instead of driving each kid to their personal best, the goal is to demonstrate that they've seen it go by.

    Hey! Is that a 21st century poem?

    ReplyDelete
  13. SteveH:

    Statistics aren't about the individual at all, that's true. You can't say anything about an individual by studying a group. Parents have an obligation to do what's best for their individual offspring. School systems have a responsibility for thousands upon thousands, schools for hundreds, and teachers for dozens.

    As a teacher I have a responsibility for all of my kids. At times I ache to take one aside and do something for them as an individual but I also know that in so doing I'm leaving 65 others by the side of the road. Does a parent do that? No, they do what's best for the entire family.

    One day I had a really bad class. Nothing went right; behavior was off the charts. I threw down my marker with about 10 minutes to go and just gave up. I was literally seconds away from walking out the door. Suddenly, out of the unearthly silence, one of my lowest performers (and liveliest challenges) said, "Are you giving up on us, Mr. B?" I walked out of the school that day hiding my watering eyes. By the time I got to the car I was crying like a baby, ashamed of my performance and lack of will.

    If you don't have an experience like that as a teacher at least once a year, you're not trying hard enough. I know teachers who buy clothes for their kids, do their hair, drive them into our district illegally to provide some stability, and all manner of things they'd get fired for if they got caught.

    I hear you. But trust me, you can't turn an ocean liner with one oar. That doesn't mean the rowers should stop trying. But while you man your oar, others have to get into the wheelhouse. Each of us in our own way is trying to accomplish the same change of course.

    ReplyDelete
  14. wow - great post!

    I need to get ALL of these comments "up front...."

    or catalogued somewhere

    I wonder whether the question of how much faster an initially slow learner can become is still open:

    Schemas and Memory Consolidation

    We now report, however, that systems consolidation can occur extremely quickly if an associative "schema" into which new information is incorporated has previously been created....Schemas also played a causal role in the creation of lasting associative memory representations during one-trial learning.

    (The subjects in this experiment were rats.)

    another post on changes in speed of learning:

    on not teaching to mastery

    ReplyDelete
  15. As to statistics versus case histories, only recently have I begun to understand what Steve means by this (I think).

    Ed and I have been burned by "data" so many times now that I'm extremely leery of any use of data by my district, at least when it comes to individual children.

    A recent example: a presentation to the school board in which the presenter claimed that we now have data-analysis software that allows teachers to "prove" that what they are doing is working. That was the word: "prove."

    What's happening now, in public schools, is that districts are buying or licensing data warehousing / data mining software and handing it over to people with no training in math or statistics.

    They plug in whatever values the software asks for and the software spits out "proof."

    ReplyDelete
  16. The point of looking at the dispersion over time is not to highlight learning rate differences. Of course kids learn at different rates, and although it would be a nice goal to close the rate gaps, that's not the point.

    The point is that, GIVEN differing rates, if you put kids in a system that does NOT acknowledge those differences, then over time dispersion is aggravated. Teachers in middle school are then faced with the daunting task of having a 4 or 5 grade spread in their classes. A saner system would acknowledge, indeed encourage, rate differences: if you can go fast, then fine, we'll push you ahead; if you go slow, then we won't keep putting you in classes where you aren't in your ZPD; if you have learning disabilities, then we will provide accommodations. If such treatment closes rate gaps, great. If it doesn't, who cares.

    My premise is that you simply cannot address rates at all in a system that groups kids by non-academic criteria. And I agree that you can't use this data to move individuals or fix anything in situ. I would oppose an effort to move kids' rates inside the present structures. The structure is the problem.

    The goal should be that each child is able to work in their ZPD and that determination is an ongoing, relentless piece of the puzzle. Where kids get placed should not be a function of their age, shoe size, performance in another system, or what it says in their IEP. Ideally, I'd move the goal posts every day, driven by individual measures of academic performance.

    This data was taken from a system that does absolutely nothing (systemically) to address rate differences except the catchall, differentiation. I propose that the data is evidence of the brokenness of that system, nothing more. It's not an argument to close rates. It's an argument to make the system aware of the fact that they exist and that they are a problem that needs to be addressed by something more than pixie dust.

    Time in the system needs to become a variable. Mastery needs to become a constant.

    ReplyDelete
  17. I am very bothered that a treatise on mathematics misinterprets the percentiles so badly.
    After spending days explaining the use of median income and skew to my high school students, and that the median is the 50th percentile, lo! the newspaper published data on income by state (provided by the government accounting office) and printed that 89% of our state had an income below the median. Frankly, it takes a lot of faith for students to continue to believe what I tell them when they are confronted by "data interpretation" such as this.
    I follow the spelling rule - sure, typos are okay for contributions, but when I find a website with spelling errors and typos, I assume it's all rubbish.

    ReplyDelete