One of the goals of No Child Left Behind is to increase the availability of data. Part of the implicit model underlying No Child Left Behind is that with improved information, parents will recognize good and bad schools. Principals will identify good and bad teachers. District administrators will identify weak and strong principals, and state administrators will recognize struggling school districts. Armed with this information, parents will vote with their feet, and the other actors will undertake the necessary reforms to improve education.
As an empirical economist I am, of course, sympathetic to the use of data, and as a school board member I pushed for more thorough evaluation of our programs. But the gap between the rhetoric and the ability to use education data effectively is large.
Few school districts have the resources to analyze statistical data in even remotely sophisticated ways. In the early days of the Massachusetts Comprehensive Assessment System (MCAS) tests, I visited the Assistant Superintendent for Curriculum and Instruction who was anxious to use the testing data to help Brookline address its achievement gap. The state Department of Education had provided each district with a CD with the complete results of each student’s MCAS test. In principle, it would be possible to pinpoint the exact questions on which the gap was greatest. The problem was that no one in the central administrative offices could figure out how to read the CD. I loaded the CD onto my laptop and quickly ascertained that the file could be read with Excel. Shortly thereafter, our Assistant Superintendent attended a meeting of her counterparts from the western (generally affluent) suburbs of Boston and discovered that Brookline was the only system that had succeeded in reading the CD. Districts have become somewhat more savvy about using data. A younger generation of administrators has more experience with computers, but relatively few would be able to link student report cards generated by the school district with SAT scores and the state tests.
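To give a sense of what that linking task involves, here is a minimal sketch that joins three hypothetical files on a shared student identifier. The file names and column names are invented for illustration, not an actual district's schema.

```python
# A minimal sketch of linking district report cards to SAT and state test
# results on a common student ID. File and column names are hypothetical.
import pandas as pd

report_cards = pd.read_csv("report_cards.csv")  # student_id, grade, course_grades, ...
sat_scores = pd.read_csv("sat_scores.csv")      # student_id, sat_math, sat_verbal
state_tests = pd.read_csv("mcas_results.csv")   # student_id, mcas_math, mcas_ela

linked = (
    report_cards
    .merge(sat_scores, on="student_id", how="left")   # keep students with no SAT record
    .merge(state_tests, on="student_id", how="left")
)
print(linked.head())
```

The mechanics are a few lines of code; the point of the anecdote is that for many district offices even this step is out of reach.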
Principals, district administrators, and even state-level administrators generally begin their careers as teachers, and relatively few teachers have strong backgrounds in statistical reasoning. In my experience, the people who rise to senior administrative positions in public education are smart. They understand in a general sense that estimates come with standard errors attached, but faced with a report that last year 43 percent and this year 56 percent of black students in fourth grade were proficient in math, few could tell you whether, with 75 students each year, the change was statistically significant.
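For the curious, here is a back-of-the-envelope check of that example, assuming a standard two-proportion z-test and rounding the stated percentages to whole counts of students:

```python
# Is a move from 43% to 56% proficient, with 75 students each year,
# statistically significant? A two-proportion z-test sketch; counts are
# rounded from the percentages given in the text.
from math import sqrt

from scipy.stats import norm

n1 = n2 = 75
x1 = round(0.43 * n1)   # 32 proficient students last year
x2 = round(0.56 * n2)   # 42 proficient students this year
p1, p2 = x1 / n1, x2 / n2

p_pool = (x1 + x2) / (n1 + n2)                        # pooled proportion under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # std. error of the difference
z = (p2 - p1) / se
p_value = 2 * norm.sf(abs(z))                         # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.2f}")  # z ~ 1.63, p ~ 0.10
```

On these assumptions the year-to-year jump does not clear the conventional 5 percent significance threshold, which is precisely the kind of judgment the author suggests few administrators could make.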
When I stepped down from the school board, one of my colleagues joked that they could all go back to treating correlation as causality. In education policy settings, one repeatedly hears statements like: “Students who take Algebra II in eighth grade meet the proficiency standard in grade ten. We must require all students to take Algebra II in eighth grade.” “Students taking math curriculum A and curriculum B get similar math SAT scores. The curricula are equally good.” “Students who are retained in grade continue to fall further behind. Retention is a bad policy.”
School administrators may understand at some level that they are only looking at correlations, but almost none have the training to address the issue of causality, and faced with a correlation, they will often interpret it causally in the absence of evidence to the contrary. The capacity to address causality, the weaknesses of various measures, and the other strengths and limitations of statistical evidence is very limited. The Public Schools of Brookline recently recruited for a Director of Data Management and Evaluation. Although school board members generally are not (and should not be) involved in personnel decisions other than those involving the Superintendent, in this specific case the Superintendent asked me to participate in the candidate interviews. Many of the candidates held or had held similar positions in other districts. I asked each candidate how we could decide whether a math curriculum used by some, but not all, of our students was effective. Many of the candidates did not think of this question in statistical terms at all. Only one addressed the issue of selection—and we hired him.
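The selection issue the successful candidate spotted can be made concrete with a toy simulation, in one possible reading of the interview question: suppose the two curricula are equally effective, but stronger students disproportionately opt into curriculum A. Every number below is invented for illustration; nothing here is Brookline data.

```python
# A toy illustration of selection bias in curriculum comparisons: both
# curricula have the same true effect (zero difference), but higher-ability
# students are more likely to choose curriculum A, so a naive comparison
# of mean scores flatters A.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(0.0, 1.0, n)              # unobserved prior ability

# Selection: the probability of taking curriculum A rises with ability.
takes_a = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * ability))

# True data-generating process: the curriculum itself adds nothing.
score = 500.0 + 50.0 * ability + rng.normal(0.0, 20.0, n)

naive_gap = score[takes_a].mean() - score[~takes_a].mean()
print(f"naive A-minus-B gap: {naive_gap:.1f} points")  # large, despite a true effect of zero
```

The naive gap reflects who chose each curriculum, not what the curricula did; telling those two apart is the selection problem.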
Measurement Matters: Perspectives on Education Policy from an Economist and School Board Member (pdf file)
by Kevin Lang
Apparently they don't cover Excel in ed school.