kitchen table math, the sequel: state tests
Showing posts with label state tests. Show all posts

Tuesday, April 2, 2013

uh oh

A note from palisadesk:
Unfortunately, it is true that using Lucy Calkins' methods can raise test scores, due to the design of the current generation of "authentic assessments" (aka holistic assessment, standards-based assessment, performance assessment). I know several schools (including my own) where test scores rose substantially when they STOPPED doing systematic synthetic phonics and moved to a workshop model instead.

So to prove the instructivist stuff works you also need to have in place testing that assesses actual skills -- phonemic decoding, vocabulary, grammar, spelling, arithmetic, etc.

It's really not all that unbelievable, if you consider how the testing has changed. Schools used to use norm-referenced measures (like the IOWA, the CTBS, the Metropolitan Achievement Test, etc.), which also have definite limitations, but different ones.

Once they replaced those (as many states have done) with "constructed-response" item tests, variously known as performance assessments, holistic assessments, standards-based assessments and so on, a more fuzzy teaching approach also yielded benefits. These open-response items are usually scored on a rubric basis, based on anchor papers or exemplars, according to certain criteria for reasoning, conventions of print, organization, and so forth. These are variously weighted, but specifics like sentence structure, spelling, grammar, paragraph structure, etc. generally carry less weight than such things as "providing two details from the selection to support your argument."

The open responses often mimic journal writing -- the response is personal in tone and calls for the student to express an opinion, and many elements of what we would call good writing (or correct reading) count for little or even nothing.



The same is true in math. A very exclusive local private school, famous for its high academic achievement, recently switched from traditional math to Everyday Math and saw its test scores soar on these assessments (probably not on norm-referenced measures, but they aren't saying).



Another school where I worked implemented good early reading instruction with a strong decoding base (and not minimizing good literature, either), but saw its scores on the tests go down almost 25%. I think the reason is that teaching children to write all this rubbish for the "holistic assessments" is very time-consuming, and if you spend your instructional time teaching the basic skills -- which aren't of much value on these tests -- your kids will do poorly.



So yes, you can post [my email], not referring to me of course. You can say -- because I don't think I've mentioned it publicly anywhere -- that I have been involved in the past in field-testing these assessments, so I have a more complete picture of how they are put together and evaluated, and what they do and do not measure. Different states have made up their own, but they share many similarities.
I was surprised when I read this. Somehow I had assumed that, basics being basic, the absence of basics would make any test hard to pass.

Apparently not.

Tuesday, March 20, 2012

mixed-level classrooms in high-SES districts

During her workshop on "Project Based Assessment," one of the presenters at the misnamed "Celebration of Teaching and Learning" made a comment that struck me.

She said she had been a teacher in an affluent community, and as a result she had had an extremely wide variation in her students' instructional levels. The reading levels in her 6th grade class ranged all the way from first grade to college level.

That enormous range made group projects more or less a requirement, in her view. When students whose reading levels range from 1st grade to college work together, the differentiation handles itself. (Her actual words may have been "They differentiate themselves"; unfortunately, my iPad erased my notes.)

It had never occurred to me that affluent districts might have substantially more variability inside the classroom than less affluent districts, though as I think about it, it makes sense.

If it's true that teachers routinely see extremely wide variation in ability and instructional level in high-SES schools, this is another case where the use of group means as the only measure of success is particularly deadly in affluent districts.

Monday, March 21, 2011

question about value-added measurement

My copy of the Harvard Education Letter arrived with an excerpt from a new book on value-added measurement. I have a question about this passage:
Misconception 2: Value-added scores are inaccurate because they are based on poorly designed tests. Most standardized tests are indeed flawed, but this is not a problem created or worsened by value-added.
Seven Misconceptions About Value-Added Measures
by Douglas N. Harris
Harvard Education Letter - March|April 2011
p. 8
Is that correct?

As I understand it, New York's state tests can't be used for value-added purposes. The tests are shorter in some years, longer in others, and somehow don't correspond to a one-year measurement of learning. Or so we were told. Certainly they don't provide any sense of where a child might be within a year's worth of content. New York tests are scored 1 to 4, so if your child scores a middling 3, what does that mean on a scale of 10 months? Nobody knows.

I had been assuming that in order to use a standardized test as a value-added measurement, the tests had to be normed month-by-month as the Iowa Test of Basic Skills is normed:
Grade Equivalent (GE)
The grade equivalent is a number that describes a student's location on an achievement continuum. The continuum is a number line that describes the lowest level of knowledge or skill on one end (lowest numbers) and the highest level of development on the other end (highest numbers). The GE is a decimal number that describes performance in terms of grade level and months. For example, if a sixth-grade student obtains a GE of 8.4 on the Vocabulary test, his score is like the one a typical student finishing the fourth month of eighth grade would likely get on the Vocabulary test. The GE of a given raw score on any test indicates the grade level at which the typical student makes this raw score. The digits to the left of the decimal point represent the grade and those to the right represent the month within that grade.
When your child takes the ITBS from one year to the next, it's simple to see whether he's made a year's progress in a year's time. If, at the end of grade 3, he scored a 3.10 on computation (grade 3, month 10), he should score a 4.10 on computation at the end of 4th grade.
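The arithmetic is worth spelling out, because GE notation sets a trap: a GE is a "grade.month" label, not a true decimal, so 3.10 (grade 3, month 10) is not the same score as 3.1 (grade 3, month 1). A minimal Python sketch of the year's-progress check (illustrative only, not an official ITBS tool; the 10-months-per-grade assumption follows the definition quoted above):

```python
def parse_ge(ge: str) -> int:
    """Convert a GE string like '3.10' into total months of schooling,
    treating the digits after the dot as a month, not a decimal fraction."""
    grade, month = ge.split(".")
    return int(grade) * 10 + int(month)

def months_of_progress(ge_before: str, ge_after: str) -> int:
    """Months of growth between two GE scores."""
    return parse_ge(ge_after) - parse_ge(ge_before)

# The example above: 3.10 at the end of grade 3, 4.10 at the end of
# grade 4 -- exactly one school year (10 months) of growth.
print(months_of_progress("3.10", "4.10"))  # 10
```

The scores are parsed as strings precisely because reading them as floats would collapse 3.10 and 3.1 into the same number; GE scores have to be handled as grade/month pairs. Nothing like this is possible with a 1-to-4 scale score, which is the point of the question below.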

But how would you make that determination using the New York tests?

Or is there some other comparison you make from year to year?

Sunday, September 12, 2010

Another article (or two): Is it bad to test students frequently?

Two recent New York Times articles fly in the face of conventional pop psychology and education theory.

An article in last week's Science Section on study habits cites cognitive science research indicating that the act of taking a test can enhance learning:
The process of retrieving an idea is not like pulling a book from a shelf; it seems to fundamentally alter the way the information is subsequently stored, making it far more accessible in the future.
As Henry L. Roediger III, a psychologist at Washington University in St. Louis puts it, “Testing not only measures knowledge but changes it.”

Next we have a front page article in this weekend's Week in Review on Testing, the Chinese Way, written by Elisabeth Rosenthal, whose children spent a year at the International School of Beijing where "taking tests was as much a part of the rhythm of their school day as recess or listening to stories." Citing personal experience, Rosenthal argues that:

  • Young children aren't necessarily aware that they are being "tested."
  • Frequent tests give children important feedback about how they are doing.
  • Frequent tests offer a more meaningful way to improve self-esteem than frequent praise does.

On this last point, Rosenthal cites Gregory J. Cizek, a professor of educational measurement and evaluation at the University of North Carolina at Chapel Hill:
Professor Cizek, who started his career as a second-grade teacher, said the prevailing philosophy of offering young children unconditional praise and support was probably not the best prescription for successful education. “What’s best for kids is frequent testing, where even if they do badly, they can get help and improve and have the satisfaction of doing better."
Cizek's overall take on testing in schools? "Research has long shown that more frequent testing is beneficial to kids, but educators have resisted this finding."

Rosenthal concludes on a particularly powerful note:
When testing is commonplace and the teachers are supportive — as my children’s were, for the most part — the tests felt like so many puzzles; not so much a judgment on your being, but an interesting challenge. It is a testament to the International School of Beijing — or to the malleability of childhood memory — that Andrew now says he did not realize that he was being tested. Will tests be like that in a national program, like Race to the Top?

When we moved back to New York City, my children, then 9 and 11, started at a progressive school with no real tests, no grades, not even auditions for the annual school musical. They didn’t last long. It turned out they had come to like the feedback of testing.

“How do I know if I get what’s going on in math class?” my daughter asked with obvious discomfort after a month. Primed with Beijing test-taking experience, they each soon tested into New York City’s academic public schools — where they have had tests aplenty and (probably not surprisingly) a high proportion of Asian classmates.

Wednesday, July 14, 2010

more from New York

A new report is spurring New York state to overhaul the way it defines academic proficiency for public-school students -- a massive change that could suddenly label tens of thousands of kids as being below grade level.

[snip]

Among older kids, the study found that those who did just well enough on their high-school math Regents to graduate -- scoring at or slightly above the passing grade of 65 -- had a less than 5 percent chance of getting placed in the easiest for-credit math course offered to CUNY freshmen.

Hiking class standards
By YOAV GONEN, Education Reporter
New York Post
Posted: 4:39 AM, July 14, 2010

Thursday, July 8, 2010

score inflation in NY

Over the last few years, student performance has soared on math and English tests across New York State, with the most dramatic improvements evident in urban districts such as Buffalo, leading many to celebrate the progress.

But now, state education officials say the progress may not have been quite what it seemed.

Weaknesses in the state’s testing and scoring systems over the last several years created what Education Commissioner David M. Steiner equates to systemic “grade inflation.”
  • Students who score at the “proficient” level in middle school math, for instance, stand only a 1-in-3 chance of doing well enough in high school to succeed in college math, he said.
  • Students begin getting “inflated” test scores before they hit high school, state officials said. A student who scores a 3 on a state math test — which is considered “proficient” on the scale of 1 to 4 — stands only a 30 percent chance of getting an 80 on the high school Regents math exam, they said.
  • ...a student who scored at the proficient level on a state test in 2006 was in the 45th percentile on the national test, meaning that 55 percent of students in the country scored better. In 2009, the same score on the state test would land a student in the 20th percentile on the national test, meaning that 80 percent of students nationwide scored better.

The state Education Department recently asked a group of experts, led by Harvard University’s Daniel M. Koretz, to determine how closely eighth-grade scores correlate to high school Regents exam scores — and how well those Regents exam scores correlate to success in college.

Flawed tests distort sharp rise in scores by students
By Mary B. Pasciak
Updated: July 06, 2010, 11:42 pm
Published: July 07, 2010, 6:35 am

Saturday, July 12, 2008

state standards report card

Which States Have World-Class Standards and Which Do Not

New York gets a C+.

8th grade standards in reading are declining, which makes sense, I think. I remember reading a while back that 8th grade standards were the only "hard" standards, which I know is the case here in New York. The former assistant superintendent told parents that the reason our 8th graders weren't doing well on the 8th grade ELA test was that the test was "unnecessarily difficult."

You can probably see the 8th grade jump in difficulty for the top states here. The 4th grade tests get lower grades than the 8th grade tests.

Prior to NCLB being enacted, states were required to test students in the 4th & 8th grades. There were no penalties for students failing these tests, which were strictly informational.

I assume that once NCLB was passed states in some way "kept" their old 4th and 8th grade tests, while writing new tests for the rest of the grades. The problem was that NCLB testing was high stakes; there were consequences attached.

So the 8th grade tests are being brought in line with the easier NCLB-era tests.

That's my suspicion, at any rate.