kitchen table math, the sequel: uh oh

Tuesday, April 2, 2013

uh oh

A note from palisadesk:
Unfortunately, it is true that using Lucy Calkins' methods can raise test scores, due to the design of the current generation of "authentic assessments" (aka holistic assessment, standards-based assessment, performance assessment). I know several schools (including my own) where test scores rose substantially when they STOPPED doing systematic synthetic phonics and moved to a workshop model instead.

So to prove the instructivist stuff works you also need to have in place testing that assesses actual skills -- phonemic decoding, vocabulary, grammar, spelling, arithmetic, etc.

It's really not all that unbelievable, if you consider how the testing has changed. Schools used to use norm-referenced measures (like the Iowa, the CTBS, the Metropolitan Achievement Test, etc.), which also have definite limitations, but different ones.

Once they replaced those (as many states have done) with "constructed-response" item tests, variously known as performance assessments, holistic assessments, standards-based assessments and so on, a fuzzier teaching approach also yielded benefits. These open-response items are usually scored on a rubric basis, based on anchor papers or exemplars, according to certain criteria for reasoning, conventions of print, organization, and so forth. These are variously weighted, but specifics like sentence structure, spelling, grammar, paragraph structure etc. generally carry less weight than such things as "providing two details from the selection to support your argument."

The open responses often mimic journal writing -- they are personal in tone, they call for the student to express an opinion, and many elements of what we would call good writing (or correct reading) count for little or even nothing.



The same is true in math. A very exclusive local private school famous for its high academic achievement recently switched from traditional math to Everyday Math and saw its test scores soar on these assessments (probably not on norm-referenced measures, but they aren't saying).



Another school where I worked implemented good early reading instruction with a strong decoding base (and not minimizing good literature, either), but saw its scores on the tests go down almost 25%. I think the reason is that teaching children to write all this rubbish for the "holistic assessments" is very time consuming; if you spend your instructional time teaching the basic skills -- which aren't of much value on these tests -- your kids will do poorly.



So yes, you can post [my email], not referring to me of course. You can say -- because I don't think I've mentioned it publicly anywhere -- that I have been involved in the past in field-testing these assessments, so I have a more complete picture of how they are put together and evaluated, and what they do and do not measure.

Different states have made up their own, but they share many similarities.

I was surprised when I read this ... somehow I had assumed that, basics being basic, the absence of basics would make any test hard to pass.

Apparently not.
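
To make the weighting mechanics concrete, here is a toy sketch of how a constructed-response item like the ones palisadesk describes might be scored. The criteria come from the note above; the weights and the 0-4 ratings are invented purely for illustration, since real state rubrics vary:

# A toy constructed-response rubric. All weights are invented for
# illustration; real rubrics differ by state. Note how little of the
# score depends on conventions like spelling and grammar.
RUBRIC_WEIGHTS = {
    "supporting_details": 0.40,  # e.g., "two details from the selection"
    "reasoning": 0.30,
    "organization": 0.20,
    "conventions": 0.10,         # spelling, grammar, sentence structure
}

def score(ratings):
    """ratings: dict mapping criterion -> rubric rating on a 0-4 scale."""
    return sum(RUBRIC_WEIGHTS[c] * r for c, r in ratings.items())

# A response with mangled conventions but the requisite "two details"
# still lands near the top of the 0-4 scale:
print(score({"supporting_details": 4, "reasoning": 3,
             "organization": 3, "conventions": 1}))  # 3.2 out of 4

On a scheme like this, a student who never masters spelling or grammar gives up at most a tenth of the available points.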

10 comments:

Glen said...

Yet another very interesting contribution from palisadesk.

I've seen this issue in multiple domains, where people seemed to do just fine without much foundation underneath their skills.

I'm still often surprised by how many professional programmers are former English majors who can't divide 1/4 by 2/3 to save their lives but seem to do fine at their programming jobs. They learn how to use the systems they work with and do similar tasks over and over, occasionally googling for some online code samples, which they modify for their purposes. It mostly works, and I find myself having to stretch uncomfortably, and often unsuccessfully, to come up with examples of why their lack of what I consider foundational skills matters in the "real world."
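
For what it's worth, that's a one-step invert-and-multiply: 1/4 ÷ 2/3 = 1/4 × 3/2 = 3/8. And a programmer who can't do it by hand can have Python's standard fractions module do it, which is rather the point:

from fractions import Fraction

# Dividing by a fraction means multiplying by its reciprocal:
# 1/4 ÷ 2/3 = 1/4 × 3/2 = 3/8
print(Fraction(1, 4) / Fraction(2, 3))  # prints 3/8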

I see this in math, too, where countless people rely on memorized procedures instead of conceptual understanding. It mostly works for most people. If there's something you can't remember, you really can often find something online that will show you how or do it for you.

And in music. I'm no musician, but I remember in high school just sitting down with a guitar and some friends, copying their movements to learn a song, practicing it incessantly, and ending up playing it pretty well. I did this for a lot of songs. I eventually realized that I was not really getting any closer to being able to produce anything that wasn't a very close copy of someone else---what song writers were doing to invent the music I was copying was a total mystery to me---but I also observed that there were a lot of professional musicians who seemed to do quite well just doing what I was doing. (I wasn't satisfied with a type of progress that seemed to get broader but no deeper, didn't know what to think about it, concluded that I probably lacked some sort of deeper talent, and gave it up.)

And in a foreign language. I've had several interesting experiences with people who had just learned by communicating ("the modern, natural way") and seemed like near natives in proficiency but whose apparent nativeness collapsed surprisingly when something forced them out of their well-practiced comfort zones. But they really did quite well in daily life.

I wish there were an easy and persuasive demonstration of the value of building up from foundational knowledge and skills, but people who go right to the task at hand rather than building up to it can be surprisingly successful. Tests that evaluate their "real world" abilities can find them quite competent, and tests that challenge their foundational skills can easily be derided as "artificial."

MagisterGreen said...

Well, when the tests aren't looking to test anything basic, the lack of understanding of the basics is no handicap.

Unfortunately, what this approach must ultimately lead to is a complete ossification of society. Lacking a command of the basics means lacking the ability to deal with novel situations -- situations in which one's pre-fabricated responses are inadequate. As such, I fail to see how innovation can continue, beyond those ever-so-few geniuses who do understand the basics and thus can see beyond the immediate into the potential.

Teaching the basics is hard, and testing for them is even harder since you often can't test for the basic concept itself but rather are testing for evidence that understanding of the basic concept exists.

So long as nothing in our world ever changes or otherwise requires anyone to work beyond their pre-determined zone of competence, testing and teaching of this sort will do just fine. The depressing thing is that your average medieval peasant had a more thorough grounding in the basics than our kids will receive under this new system.

SteveH said...

In our current holistic state tests, they don't directly test the basics. They use authentic problems and figure out how to divide each part of a problem into areas like number sense and problem solving. So, instead of getting direct feedback telling a school to work more on two-digit multiplication, they get a vague number that talks about number sense.

I distinctly remember a parent/teacher meeting that was looking at our drop in problem solving. The number went down, but we had no understanding of what that number meant. Was it a systemic problem or a small relative problem? It couldn't be a systemic problem, because our numbers were pretty good compared to other schools in the state. Right. So, what was the solution? Work harder on problem solving. I guess they just sent a memo to the teachers.

If the test were strictly a test of basics, then we would know which teachers seem to be good at teaching very specific skills. We would know exactly what to fix. This won't happen if teachers think that skills are rote.

It's like bad partial credit. If you give enough partial credit in math, you can get an 'A' without ever getting one problem correct.
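
A back-of-the-envelope version of that claim, with all the rubric numbers invented purely for illustration:

# Hypothetical grading scheme (numbers invented): each problem is worth
# 4 points, and the rubric awards 3.7 of them for strategy, labeled
# work, and a written explanation -- right answer or not.
POINTS_PER_PROBLEM = 4
PARTIAL_CREDIT = 3.7   # earned even when the final answer is wrong
PROBLEMS = 10

all_wrong = 100 * (PROBLEMS * PARTIAL_CREDIT) / (PROBLEMS * POINTS_PER_PROBLEM)
print(all_wrong)       # 92.5 -- an 'A' with zero correct answers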

I see this attitude in the new PARCC test we are going to be using. The fuzziness and low expectations continue. I expect there will be a conflict between that test and ones that will be aligned with the ACT and SAT.


The problem is that many K-8 educators don't value basic skills. They want a test that takes a holistic approach, where you can somehow do well even without mastery of the basics. In math, this attitude goes away by high school, but by then the damage has been done. Somehow, reality has to be pushed down into the pedagogical dreamworld of the lower grades. The ACT and the College Board have the credibility and clout to do this, as long as they don't wimp out on directly testing basic skills.

SteveH said...

BTW, our scores went up when our schools switched to Everyday Math. Then again, they were using MathLand before, and we're only talking about small relative changes in a world where there is a huge systemic problem.

Our state uses a fuzzy test, but it still converts a raw percent-correct score into a scaled score between 20 and 80. Then it defines a low proficiency cutoff point and computes a proficiency index from the percent of students who get over that cutoff. Then, because these numbers still look stinking bad, our schools talk about their ranking in the state.
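
A stripped-down sketch of that pipeline shows how much information it throws away. The linear 20-80 map and the cutoff of 50 here are invented for illustration; real score scaling is psychometric, not linear:

# Toy version of the scoring pipeline described above. The linear map
# and the cutoff are invented; real scaling is psychometric.
def scaled_score(raw_percent):
    return 20 + (raw_percent / 100) * 60   # maps 0-100% onto 20-80

def proficiency_index(raw_percents, cutoff=50):
    passing = sum(1 for r in raw_percents if scaled_score(r) >= cutoff)
    return 100 * passing / len(raw_percents)

# Two very different classrooms, one identical proficiency index:
print(proficiency_index([51, 51, 51, 51]))  # 100.0 -- everyone barely over
print(proficiency_index([95, 95, 95, 95]))  # 100.0 -- everyone masterful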

If some numbers look bad, they really can't define the problems, and they don't know how to fix them. Holistic tests do not provide useful feedback even if you believe in the results.

Anonymous said...

Our state (MN) tests have constructed responses too, and again, Everyday Math did better on some scores of some of those tests because students are now given credit for "explaining their thinking" in words. The ELL kids got hammered.

But you can only fake reality for so long. Sooner or later, someone wants you to actually know something, not solve-by-Google. Yeah, Glen, it seems to work. Then again, watching the PRC copy a jet design through theft seems to work too. Just don't fly it.

We have eaten the seed corn.

The TIMSS scores are instructive. I will post on our "high scoring" state TIMSS performance tomorrow.

SATVerbalTutor. said...

Even on the SAT, it's perfectly possible to score very well on the essay without having anywhere near mastered the basic conventions of correct writing. (If you're curious, go on College Confidential and read some of the "12" essays that people have posted -- they're utterly horrifying.) If Coleman does one thing when he revamps the SAT, he needs to fix the criteria for scoring the essay. No one who clearly doesn't understand what a sentence is should be allowed to get a top score.

But it's interesting: you see a lot of kids who can score super-high in Math and Writing and then just fall down on CR. They can memorize formulas and apply them to a certain point, but when so many elements are in play at once, they're in way over their heads. Very often these kids will complain that they get down to two answers "but always guess the wrong one" when in fact they have no real understanding of what the passage was actually saying (as evidenced by the fact that one of the answers is exactly the *opposite* of the point of the passage).

Even the French AP has been revamped to focus more on holistic "real world" skills; the grammar-based format of the old test was apparently just too hard. The result is that kids can bullshit their way through, and even ones who have no real understanding of the language can pass.

Robin said...

Steve -- on your comment about aligning with the ACT and SAT: David Coleman has said the SAT will be redone to emphasize College and Career Ready Skills, an amorphous phrase that I have tracked back to David Conley and Gates funding. ACT has come out with a new assessment called Aspire; numerous people have contacted me, concerned, to ask whether I thought it was OBE reincarnated. Short answer: yes.

I have also tracked down Pearson admitting that PARCC and SBAC are assessing 21st century skills, using ill-defined problems to encourage group collaboration since there is no fixed answer -- just a rubric.

I also have the Hewlett Foundation hiring UCLA-based CRESST to confirm that these assessments are measuring emotionally grounded deeper learning.

http://www.invisibleserfscollar.com/throwing-an-invisibility-cloak-over-the-classroom-to-get-to-deweys-participatory-social-inquiry/ explains it.

NAEP, apart from the long-term trend component, also works as Palisadesk described. In fact, that is what it is measuring: compliance with the constructivist vision. And PISA is so constructivist that it is explicitly based on checking for desired Competencies -- a mixture of generic skills coupled to attitudes and values.

Buckle Up. Stormy seas ahead in ed world.

SteveH said...

The PARCC people define the "strong" PLD level 4 in math to mean that 75 percent of those students would pass a college algebra course. Their top "distinguished" level 5 means only that it is likely you would pass the same course. In the PARCC PLD level document, there is no mention of STEM, and their test will provide no feedback on how to prepare kids to reach that level. They talk about calibrating the PARCC test to the ACT and SAT, but how could they do that when their top "distinguished" level is so low?


I also see no indication that the ACT and SAT tests will evolve to become more holistic. I see only that the SAT might evolve to look more like the ACT, because more kids are taking the ACT. This doesn't mean that the developers of their lower-grade tests won't drink from the same holistic Kool-Aid cup, but they have to show continuity of scores through to their high school ACT or SAT tests. And if the College Board pushes its Pre-AP program back to the early grades, it will have to show some correlation between tests. They might see results flip between K-8 and high school if they use holistic testing. I would like to see them try to do a Pre-AP math class using holistic ideas and testing.

SteveH said...

CCSS leaves a lot of room for interpretation and for calibration of different performance levels. We are starting to see the details with PARCC, and it's not good. The expectations are low, and they ignore STEM preparation. Their ultimate top goal -- no remediation in college algebra -- shapes expectations starting in the earliest grades. This guarantees that STEM-ready students will have to continue to get help outside of school in K-8. Students and parents are misled by performance levels called "strong" and "distinguished".

Here are ACT's longitudinal benchmarks in math on the path to college algebra:

http://www.act.org/solutions/college-career-readiness/college-readiness-benchmarks/

"The benchmarks are scores on the ACT subject-area tests that represent the level of achievement required for students to have a 50% chance of obtaining a B or higher or about a 75% chance of obtaining a C or higher in corresponding credit-bearing first-year college courses."

[This seems to be equivalent to PARCC's PLD level 4 for college readiness, called "strong".]

EXPLORE, Grade 8: 17
EXPLORE, Grade 9: 18
PLAN, Grade 10/11: 19
ACT, Grade 11/12: 22

[That's an interesting jump between PLAN and ACT.]

At least it's calibrated to the known quantity of the ACT, and people know what an ACT math score of 22 means. It's neither "strong" nor "distinguished".

One issue is that they don't define this sequence below 8th grade -- what about K-7? Another is that you can get a very good ACT math score and still not be properly prepared for a STEM program in college.

I found this comment:

"Beginning in the earliest grades, ACT assessments track and monitor not just readiness but behaviors and goals—to help teachers, parents, and students correct course throughout the K-12 educational journey."

I haven't found what they have for "the earliest grades" yet. The problem is that, for CCSS, college readiness really means no remediation, and if you use that to define curricula and expectations in the lower grades, students will never get above that level without help from home.

I see it as a battle between statistical good and individual good. Educators want a rising tide to float all boats, but nobody gets to fly -- except for the kids who have high standards and help from home. High schools might have AP and IB classes, but the lower grades provide no help getting there. Parents care about individual good, but many urban parents are told that sending their kids to charter schools is really not fair. Apparently, it's fair for affluent parents to teach at home or send their kids to private schools.

Joy Pullmann said...

Thank you, thank you for posting this. Please continue to address this issue. I would like to learn more details about specific types of test questions geared toward constructivist learning, and how to compare those to more objective or knowledge-based questions, just so I can look at a test and have some idea of its bent.