Siegfried Engelmann: Then we became engaged in Project Follow Through. There were originally eighteen sponsors, about 500,000 kids, 180 communities, and pooled comparison groups. It was supposed to be the definitive educational experiment. The idea was to work with kids in K-3 who had gone through Head Start.
Our students were first place in everything, but the reports were never really presented. We were first place in reading, first place in math, first in spelling, and first in language. And our kids had the most positive self-image. Yet the report that Abt Associates had developed, along with Stanford Research Institute, was just a summary of those reports.
David Boulton: The purpose of the study was to be able to see the effects of Head Start on education?
Siegfried Engelmann: Yeah.
David Boulton: Why was 'Follow Through' linked into Head Start in that way?
Siegfried Engelmann: Well, because it was originally an Office of Economic Opportunity project. And then it was taken over by the newly formed Federal Department of Education. It was designed to serve at-risk kids, disadvantaged kids, who had gone through Head Start. But that was just one of the requirements for the demography of the kids we were working with.
David Boulton: I see.
Siegfried Engelmann: A certain percentage of the students had to come from Head Start. Because Head Start was an obvious failure and they were concerned. It had no instructional component, and it was modeled after the middle-class preschool. While the middle-class preschool is probably okay for middle-class kids, the kids that we worked with were far behind in terms of language skills and...
David Boulton: So it was more concerned with creating parental freedom than with actually helping the children get ready for school.
Siegfried Engelmann: Right, yes. Anyhow, that made it a poor model for disadvantaged students. But fundamentally, Project Follow Through was designed to bail out Head Start. It was a horse race: the idea [of the Abt reports] was to declare a winner or winners, those who produced the best results in K-3, to show that Head Start was not a total disaster.
David Boulton: How could it have done that unless it was also using a control group of kids that weren't in Head Start to show the advantages of Head Start?
Siegfried Engelmann: Well, they had that. They had a vast number of comparison groups. For each school that was involved, there was a comparison school. They weren't perfect, because the comparison schools tended to have higher socioeconomic ratings. They were not as disadvantaged. But, in addition to that, the data from all of the individual comparison schools were pooled. Then there was a certain non-disadvantaged mix as part of the formulated average school. So you had your non-disadvantaged population, and also (I can't remember the exact requirement) I think over 60 percent of the kids had to have gone through Head Start. But they had data on the Head Start kids and the non-Head Start kids. It was a very elaborate study. It cost, I don't know, hundreds of millions.
David Boulton: So 'Project Follow Through' was a prototype – a model that would later be followed in many ways by the National Institute of Child Health and Human Development.
Siegfried Engelmann: Right, right. The Abt findings were suppressed largely for political reasons. In 1976, when Follow Through was being evaluated, Gene Glass, head of the Ford Foundation at the time, appealed to the National Institutes of Health with an incredible statement. He said something to the effect that, "The use of quantitative data is inappropriate and what we need is case studies. We need to document various aspects of the program so that informed consumers can make intelligent decisions."
And of course, it was total baloney. Wes Becker responded with what I thought was an extremely succinct response: "As the problem with the disadvantaged is identified by data and scores, certainly the solutions to the problems would have to be manifested with data and scores."
David Boulton: Certainly it all has to correlate somehow…
Siegfried Engelmann: [laughs] Yeah. They wanted to identify the problem qualitatively, and then solve it with methods that didn't generate any data. Becker also pointed out that if we're going to use case studies, how do we know we're using typical case studies unless we use some kind of intelligent sampling processes?
David Boulton: Yeah, and some common system of attributes that would allow you to scale through the data.
Siegfried Engelmann: Right. So, the net result was that the results of Follow Through were suppressed. The report that came out on Follow Through was that the project was a failure, which implies that all of the models were failures. And then they just rode off into the sunset with some kind of blazing saddles and that was that.
David Boulton: So buried in the dismissal of Project Follow Through as a whole, were the results it had gathered that showed the benefit of the work that you were doing, in contrast to the other systems or approaches that were compared.
Siegfried Engelmann: Yeah. I mean, it was the biggest part. But the suppression was intentional. It was contrived. It didn't just happen. Even if the overall statement that the project failed were true of the primary sponsors as a group, it did not necessarily mean that every one of them failed. That certainly was not the case.
David Boulton: And so the baby went out with the bath water there.
Siegfried Engelmann: Yes.
Tuesday, February 19, 2013
David Boulton interview with Engelmann re: Follow Through and Head Start