I found a nice little study comparing a fourth grade Direct Instruction math program with a well regarded fourth grade constructivist program. The results were surprising, to say the least. (Cross-posted at D-Ed Reckoning.)

The study, "Effective Mathematics Instruction: The Importance of Curriculum" (Crawford & Snider, 2000, Education & Treatment of Children), compared the Direct Instruction fourth grade math curriculum Connecting Math Concepts (CMC, Level D) to the constructivist fourth grade math curriculum Invitation to Mathematics (SF), published by Scott Foresman.
Invitation to Mathematics (SF)

SF has a spiral design (but of course) and relies on discovery learning and problem-solving strategies to "teach" concepts. The SF text included chapters on addition and subtraction facts, numbers and place value, addition and subtraction, measurement, multiplication facts, multiplication, geometry, division facts, division, decimals, fractions, and graphing. Each chapter interspersed a few activities on using problem-solving strategies.

Teacher B taught the 4th grade control class. He was an experienced 4th grade math teacher and had taught from the SF text for 11 years.
Teacher B's math period was divided into three 15-minute parts. First, students checked their homework as B gave the answers; then students told B their scores, which he recorded. Second, B lectured or demonstrated a concept, and some students volunteered to answer questions from time to time. The teacher presentation was extemporaneous and included explanations, demonstrations, and references to text objectives. Third, students were assigned textbook problems and given time for independent work.
The SF group completed 10 out of 12 chapters during the experiment.
Connecting Math Concepts (CMC)

CMC is a typical Direct Instruction program with a stranded design in which multiple skills/concepts are taught in each lesson; each skill/concept is taught for about 5-10 minutes per lesson and is revisited day after day until it has been mastered. Explicit instruction is used to teach each skill/concept. CMC included strands on multiplication and division facts, calculator skills, whole number operations, mental arithmetic, column multiplication, column subtraction, division, equations and relationships, place value, fractions, ratios and proportions, number families, word problems, geometry, functions, and probability.

Teacher A had 14 years of experience teaching math. She had no previous experience with CMC or any other Direct Instruction program. She received 4 hours of training at a workshop in August and about three hours of additional training from the experimenters.
Teacher A used the scripted presentation in the CMC teacher presentation book for her 45-minute class. She frequently asked questions to which the whole class responded, but she did not use a signal to elicit unison responding. If she got a weak response she would ask the question again of part of the class (
e.g., to one row or to all the girls) or ask individuals to raise their hands if they knew the answer. There were high levels of teacher-pupil interaction, but not every student was academically engaged. Generally, one lesson was covered per day and the first 10 minutes were set aside to correct the previous day's homework. Then a structured, teacher-guided presentation followed, during which the students responded orally or by writing answers to the teacher's questions. Student answers received immediate feedback and errors were corrected immediately. If there was time, students began their homework during the remaining minutes.
The CMC group completed 90 out of 120 lessons during the experiment.
The Experiment

Despite the differences in content and organization, both programs covered math concepts generally considered to be important in 4th grade--addition and subtraction of multi-digit numbers, multiplication and division facts and procedures, fractions, and problem solving with whole numbers.
Students were randomly assigned to each 4th grade classroom. The classes were heterogeneous and included the full range of abilities including learning disabled and gifted students. There were no significant pretest differences between students in the two curriculum groups on the computation, concepts and problem solving subtests of the NAT nor on the total test scores. Nor did any significant pretest differences show up on any of the curriculum-based measures.
The Results

Students did not use calculators on any of the tests.
The CMC Curriculum Test

For the CMC measure the experimenters designed a test consisting of 55 production items for which students computed answers, including both computational and word problems. The CMC test was comprehensive as well as cumulative; the problems sampled the entire range of problem types found in the last quarter of the CMC program. Problems were chosen from the last quarter because the various preskills taught early in the program are integrated into the problem types seen in the last quarter.
The results here were not surprising, although the magnitude of the difference between the two groups may be.
The SF class averaged 15 out of 55 (27%) correct answers on the posttest, up from 7 correct on the pretest. The CMC class averaged 41 out of 55 (75%) correct on the posttest, up from 6 correct on the pretest. I calculated the effect size to be 3.25 standard deviations, which is enormous, though the test was biased in favor of the CMC students.
The SF Curriculum Test

The SF test was published by Scott Foresman to accompany the Invitation to Mathematics text and was the complete Cumulative Test for Chapters 1-12. It was intended to be comprehensive as well as cumulative. The SF test consisted of 22 multiple-choice items (four choices) which assessed the range of concepts presented in the 4th grade SF textbook.
The SF class averaged 16 out of 22 (72%) correct answers on the posttest, up from 4 correct on the pretest. Surprisingly, however, the CMC class averaged 19 out of 22 (86%) correct on the posttest, up from 3 correct on the pretest. I calculated the effect size to be 0.75 standard deviations, which is large, even though the test was biased in favor of the SF students.
You read that right: the CMC students outperformed the SF students on the SF posttest.
The NAT Exam and Math Facts Test

The CMC group also scored significantly higher on rapid recall of multiplication facts. Of 72 items, the mean number answered correctly in 3 minutes was 66 for the CMC group compared to 48 for the SF group on the multiplication facts posttest. I calculated the effect size to be 1.5 standard deviations.
Posttest comparisons on the computation subtest of the NAT indicated a significant difference in favor of the CMC group. Effect size = 0.86. On the other hand, neither the scores for the concepts and problem-solving portion of the NAT nor the total NAT showed any significant group differences. The total NAT scores put the CMC group at the 51st percentile and the SF group at the 46th percentile, but this difference was not statistically significant.
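For readers who want to check the effect-size arithmetic, here is a minimal sketch of Cohen's d (difference in group means divided by the pooled standard deviation), the standard way such effect sizes are computed. The paper's group standard deviations and class sizes aren't reproduced in this post, so the SD and n values below are hypothetical, chosen only to illustrate the calculation:

```python
import math

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

# Hypothetical inputs: CMC mean 41 vs. SF mean 15 on the 55-item CMC test,
# with an assumed pooled SD of 8 and an assumed 25 students per class.
d = cohens_d(41, 8, 25, 15, 8, 25)
print(round(d, 2))  # (41 - 15) / 8 = 3.25
```

With these assumed inputs the function happens to reproduce the 3.25 figure reported above for the CMC curriculum test; plugging in the actual means and standard deviations from the paper would give the true values for each measure.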
Discussion

The CMC implementation was less than optimal, yet it still achieved significantly better performance gains compared to the constructivist curriculum. The experimenters noted:
We believe this implementation of CMC was less than optimal because (a) students began the program in fourth grade rather than in first grade and (b) students could not be placed in homogeneous instructional groups. A unique feature of the CMC program is that it's designed around integrated strands rather than in a spiraling fashion. Each concept is introduced, developed, extended, and systematically reviewed beginning in Level A and culminating in Level F (6th grade). This design sequence means that students who enter the program at the later levels may lack the necessary preskills developed in previous levels of CMC. This study with fourth graders indicated that even when students enter Level D, without the benefit of instruction at previous levels, they could reach higher levels of achievement in certain domains. However, more students could have reached mastery if instruction were begun in the primary grades.
Another drawback in this implementation had to do with heterogeneous ability levels of the groups. Heterogeneity was an issue for both curricula. However, the emphasis on mastery in CMC created a special challenge for teachers using CMC. To monitor progress CMC tests are given every ten lessons and mastery criteria for each skill tested are provided. Because of the integrated nature of the strands, students who do not master an early skill will have trouble later on. Unlike traditional basals, concepts do not "go away," forcing teachers to continue to reteach until all students master the skills. This emphasis on mastery created a challenge for teachers that was exacerbated in this case by the fact that students had not gone through the previous three levels of CMC.
Why didn't the CMC gains show up on the NAT problem solving subtest and total math measure? The experimenters opine:
Our guess is that a more optimal implementation of CMC would have increased achievement in the CMC group, which may have shown up on the NAT. In general, the tighter focus of curriculum-based measures such as those used in this study makes them more sensitive to the effects of instruction than any published, norm-referenced test. Standardized tests have limited usefulness for program evaluation when the sample is small, as it was in this study (Carver, 1974; Marston, Fuchs, & Deno, 1985). Nevertheless, we included the NAT as a dependent measure because it is curriculum-neutral. The differences all favored the CMC program.
That no significant differences occurred either between teachers or across years on the NAT should be interpreted in the light of several other factors. One, the results do not indicate that the SF curriculum outperformed CMC, only that the NAT did not detect a difference between the groups, despite the differences found in the curriculum-based measures. Two, performance on published norm-referenced tests such as the NAT are more highly correlated to reading comprehension scores than with computation scores (Carver, 1974; Tindal & Marston, 1990). Three, the NAT concepts and problem solving items were not well-aligned with either curriculum. The types of problems on the NAT were complex, unique, non-algorithmic problems for which neither program could provide instruction. Performance on such problems has less to do with instruction than with raw ability. Four, significant differences on the calculation subtest of the NAT favored the CMC program during year 1 (see Snider and Crawford, 1996 for a detailed discussion of those results). Because less instructional time is devoted to computation skills after 4th grade, the strong calculation skills displayed by the CMC group would seem to be a worthy outcome. Five, although the NAT showed no differences in problem solving skills between curriculum groups or between program years, another source of data suggests otherwise. During year 1, on the eight word problems on the curriculum-based test, the CMC group outscored the SF group with an overall mean of 56% correct compared to 32%. An analysis of variance found this difference to be significant...
And, here's the kicker. The high-performing kids liked the highly-structured Direct Instruction program better than the loosey goosey constructivist curriculum:
Both teachers reported anecdotally that the high-performing students seemed to respond most positively to the CMC curricula. One of Teacher A's highest performing students, when asked about the program, wrote, "I wish we'd have math books like this every year.... it's easier to learn in this book because they have that part of a page that explains and that's easier than just having to pick up on whatever."
It may be somewhat counter-intuitive that an explicit, structured program would be well received by more able students. We often assume that more capable students benefit most from a less structured approach that gives them the freedom to discover and explore, whereas more didactic approaches ought to be reserved for low-performing students. It could be that high-performing students do well with, and respond well to, highly structured approaches when those approaches are sufficiently challenging. These reports are interesting enough to bear further investigation once objective data are collected.