There are two kinds of data loops in play; call them loop 1 and loop 2.
Loop 1 is the "We've measured your school's performance and found it lacking" loop. This is aggregated data that has been massaged to produce your AYP (Adequate Yearly Progress, derived in Massachusetts from the MCAS). From this measure, schools are to produce a School Improvement Plan (SIP), which is basically goal setting sans the concomitant resources to actually effect a change. Since the SIP is a dead end, that particular loop looks more like a croquet wicket. It's not a loop at all.
Loop 2 is the "Your last year's students failed MCAS" loop, which is given to teachers about 5 months after the students in question have left your embrace. Usually this is aggregated too; at my school, at least, it was handed to us on a printout. I had granularity down to the question but not down to the student.
Somewhere in this measurement system is precise, standard-by-standard knowledge of each student's current (actually 5-month-old) ability. We don't get that.
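For concreteness, here's a rough sketch of the kind of per-student, per-standard record I mean. The field names, standard codes, and 70% cutoff are all illustrative inventions, not anything the state actually hands out:

```python
# Hypothetical per-student, per-standard record; everything here is invented.
from dataclasses import dataclass

@dataclass
class StandardResult:
    student_id: str
    standard: str          # e.g. a state framework code; invented here
    items_seen: int
    items_correct: int

    @property
    def mastery(self) -> float:
        return self.items_correct / self.items_seen

results = [
    StandardResult("S001", "NS.5.1", 8, 3),
    StandardResult("S001", "NS.5.2", 6, 6),
]

# With data like this, a teacher could target remediation per student
for r in results:
    if r.mastery < 0.7:
        print(f"{r.student_id}: remediate {r.standard} ({r.mastery:.0%})")
```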
The real problem is that loop 1 should be used for "oh my, I think we need a remediation here," but instead it is used for "oh my, time to reset the goal posts." If loop 1 is not used to drive some kind of structural change, then anything you learn in loop 2 (and you can't learn much) becomes a 'nice to know' kind of thing, but it doesn't fix Johnny's inability to add.
Data not acted upon is noise and old data is rancid.
Call me crazy, but I don't see the problem here.
This school should just tell all its struggling students to Seek Extra Help.
Then, when the struggling students don't Seek Extra Help, or do Seek Extra Help but Extra Help doesn't Help, they should tell the parents, "If your child doesn't come in for Extra Help, there's nothing I can do."*
That's what my school does.
It works, too.
* direct quote
20 comments:
Fresh Lot Yield
I worked for a time as a test engineer in a computer company. This was early in the life of the computer business, when product volumes were beginning to take off. In those days products were built and tested in separate departments. Testing was often done by very expensive, specialized, custom-designed equipment. One of the crucial measures of success was something called fresh lot yield. This was simply the percentage of product that passed on its first trip through the test area.
It was an important measure because it was an indicator of the fidelity of the production line, but it also drove the cost of the test area. Low yields meant repeat testing, more test equipment, more test employees, and slower time to get product out the door. As is often the case when you're in the trees, it was hard for our management to acknowledge the forest. They were always on our case (in the test department) to raise fresh lot yield. Never mind the hard reality of the probabilities that preceded our testing.
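To make the arithmetic concrete, here's a minimal sketch of the fresh-lot-yield calculation; the lot names and counts are invented for illustration:

```python
# Fresh lot yield = fraction of units passing on the first trip through test.
lots = [
    {"lot": "A12", "units": 500, "passed_first_pass": 430},
    {"lot": "A13", "units": 500, "passed_first_pass": 465},
]

for lot in lots:
    fly = lot["passed_first_pass"] / lot["units"]
    print(f"Lot {lot['lot']}: fresh lot yield {fly:.1%}")

# Everything that fails first pass feeds the retest queue -- the cost driver
retest_queue = sum(l["units"] - l["passed_first_pass"] for l in lots)
print(f"Units needing retest: {retest_queue}")
```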
At the time, our testing cost far more than the preceding production, so we were the more visible target, I suppose, for cost-driven managers. Anyway, after enough of this misdirected finger pointing, the test engineers (and the testing) migrated into production, and in the process the technology evolved from testing to defect prevention. In some cases we were able to eliminate conventional testing altogether through aggressive and innovative defect prevention. This rickety process, as silly and obvious as it might seem now, took years to fully accomplish.
In some ways, I think standardized education testing is in a state that is very much like those early days of the computer industry. We test (mostly) after ‘production’, and when the fresh lot yield is low we beat on the ‘test department’ to increase yields. In this case ‘production’ is all those years of teaching that preceded grade x, and the ‘test department’ is the grade x teacher. Now the analogy is a bit weak, since the grade x teacher is both the grade x producer and the grade x tester, but if you stretch for a moment, it's not hard to conceive of the preceding x-1 years as a production line that is purportedly delivering you raw material ready for grade x curriculum.
In my experience, the raw material produced in the x-1 production department is often not up to standards when it hits grade x. Furthermore, I believe that the structure of public education is a significant impediment to any attempt to distribute ‘testing’ such that it evolves into defect prevention as happened in the computer industry. I’ve probably exhausted the analogy at this point so I’ll give an example.
One of the common misconceptions you’ll find around fifth or sixth grade is that you can’t divide 2 by 3. This is probably due to 3 or 4 years of teachers telling kids to “put the big number in the box.” If you’re a fifth grade math teacher you have to unteach this misconception and it doesn’t die easily. Even when kids buy that you can do it they’ll have a persistent habit of putting the big number in the box (always).
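If you had per-problem answer data, a sketch of flagging the habit might look like the following; the records and tolerance are invented, and the flip is only detectable when the dividend is smaller than the divisor:

```python
# Hypothetical check for the "put the big number in the box" error pattern.
problems = [
    {"dividend": 2, "divisor": 3, "answer": 1.5},    # student did 3 / 2
    {"dividend": 2, "divisor": 3, "answer": 0.667},  # roughly correct
]

def flipped_division(p, tol=0.01):
    if p["dividend"] >= p["divisor"]:
        return False                 # flipped answer equals the correct one
    return abs(p["answer"] - p["divisor"] / p["dividend"]) < tol

for p in problems:
    if flipped_division(p):
        print(f'{p["dividend"]} / {p["divisor"]}: big-number-in-the-box?')
```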
Here’s the impediment! When I catch this, what can I do about it? It’s not easy to interact vertically. There’s often no time for it. There is no formal mechanism to drive this misconception out of the lexicon. It’s most likely buried in texts, worksheets, internet practice problems, teacher instructions, and curriculum design. There is no ‘test department’ to take on the responsibility, to really drive a stake through the beast. And oh, by the way, once it's detected in my school, how do I spread the fix to the rest of my district, state, or country? My suspicion: this goes on year after year in classrooms all over the country.
We are designed as very flat organizations with a linear business model. If you want to double production, double the input. Flat organizations are extremely inflexible (everybody owns it, so no one owns it). Did you ever think of how many teachers get up every day and build exactly the same lesson plan as you are doing right now? It is an extremely inefficient structure that has no workable mechanisms for ongoing improvement. Many dinosaur companies found themselves trapped in similar models in the eighties. Without the monopoly protections that we have, they found themselves extinct by the nineties.
My computer company? Cratered in 1998! We were slow learners but we had a lot of inertia so we lasted longer than we should have.
Probably wasn't clear...
"Johnny can't add" is a proxy for an entire class that really can't add. Sure, you can help the individuals who will show up for help. But the tragedy here is that often these little data points point to systemic problems, ones that never get fixed. So the teacher can use the data for a 'point' solution, but the 'system' ignores the signal.
And oh, for a point of reference: my kids have parents that send their kids to school in January in a blizzard wearing flip flops. After school help isn't high on the T2D list :>{
in the process the technology evolved from testing to defect prevention. In some cases we were able to eliminate conventional testing altogether through aggressive and innovative defect prevention. This rickety process, as silly and obvious as it might seem now
NO!
It's not obvious at all!
I have no idea how this works - how you went from testing to defect prevention-----???
I assume I have some idea of how this would happen in schools, but otoh I'm thinking of Engelmann & DI, which relies on extensive field testing....
I'm not sure whether the DI folks think of themselves as practicing defect prevention in a formal way.
my kids have parents that send their kids to school in January in a blizzard wearing flip flops. After school help isn't high on the T2D list :>{
Don't I know it!
The ONLY reason an entire district can premise its instructional practice on students Seeking Extra Help is that you've got a district filled with affluent, well-educated parents who aren't sufficiently well-versed in sound educational practice to realize that Extra Help isn't the same thing as Effective Classroom Instruction.
One of the common misconceptions you’ll find around fifth or sixth grade is that you can’t divide 2 by 3. This is probably due to 3 or 4 years of teachers telling kids to “put the big number in the box.” If you’re a fifth grade math teacher you have to unteach this misconception and it doesn’t die easily.
Interesting.
And yes, unteaching takes a LOT longer than teaching right in the first place. (I think Engelmann has stats on how much time it takes to unteach something incorrectly learned -- but I've forgotten what they were.)
Sure, you can help the individuals who will show up for help.
NO!
You can't!
You can't do it systematically, at least.
At the parent transition-to-high-school meeting, the h.s. principal, who is a very smart guy & has managed to set and maintain a positive atmosphere at the high school for years, spent a great deal of time talking to us about Extra Help and Tutoring.
The pitch for the high school - and this high school is in the Top 100 - is that the kids can have Extra Help all day long, every day, day in and day out. Extra Help is so fundamental to the high school that if a student doesn't Seek Extra Help, the principal said, his teachers will conclude he doesn't care.
Also, the high school provides a whole phalanx of inexpensive tutors in the form of other high school students (not your kid) who, apparently, do not need to Seek Extra Help. These tutors, the guidance counselor told us, charge much less than the going rate.
Extra Help and Tutoring: this was the pitch for a public high school in the Top 100.
Here's how you make the transition from testing to prevention...
Let's say you are testing away and you find out your biggest problem is some component that's being automatically inserted into a circuit upside down (occasionally). You go find out where that is happening and discover that there is nothing to prevent this from happening short of attentiveness on the part of a busy machine operator.
You design something that either prevents this from happening or detects it before the thing is soldered in place. When it's done right you'll never see that error again in the test department.
Then you look for your new biggest problem and do it all over again. This is of course overly simplified, but you get the idea. It is a relentless, tedious lust after preventable problems. You're looking for defects that occur in parts per million operations.
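In code form, a toy sketch of that loop might look like this; the defect names and counts are invented:

```python
# Pareto-style defect elimination: prevent the biggest defect, then repeat.
from collections import Counter

defects = Counter({
    "component_inserted_upside_down": 120,
    "cold_solder_joint": 45,
    "wrong_resistor_value": 12,
})

def deploy_prevention(defect):
    # Stand-in for designing the fixture or detector that kills the defect
    print(f"prevention deployed for: {defect}")
    defects[defect] = 0

while any(defects.values()):
    worst, count = defects.most_common(1)[0]   # biggest problem first
    deploy_prevention(worst)
```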
When it comes to DI there are similarities. DI's insistence on mastery at measurable intersections is really not prevention at the intersection, but it is prevention for the subsequent step in the process, i.e. it prevents defective product from moving on down the line.
Eventually, if you are relentless and attentive, the test department goes out of business for lack of anything detectable to complain about. Your fresh lot yield is 100% or at least high enough that testing provides no value add. Pffffft it's gone!
IMO DI's best attribute is the attention to continuous 'sampling' and intrinsic prevention of defect passing. I wonder sometimes if this isn't its real strength and would work for any pedagogy.
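A minimal sketch of that defect-passing gate; the 90% threshold is my assumption, not DI's actual parameter:

```python
# Nobody advances to the next unit below the mastery cutoff (assumed 90%).
MASTERY_THRESHOLD = 0.90

def may_advance(latest_checkpoint_score: float) -> bool:
    return latest_checkpoint_score >= MASTERY_THRESHOLD

checkpoints = {"Aisha": 0.95, "Ben": 0.70}
for student, score in checkpoints.items():
    verdict = "advance" if may_advance(score) else "reteach first"
    print(f"{student}: {score:.0%} -> {verdict}")
```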
You design something that either prevents this from happening or detects it before the thing is soldered in place. When it's done right you'll never see that error again in the test department.
Then you look for your new biggest problem and do it all over again. This is of course overly simplified, but you get the idea. It is a relentless, tedious lust after preventable problems. You're looking for defects that occur in parts per million operations.
Oh!
I see.
It's not exactly the "opposite" of testing, right?
It's...."incremental" testing?
Testing as you go?
[DI] is prevention for the subsequent step in the process
Absolutely!
Eventually, if you are relentless and attentive, the test department goes out of business for lack of anything detectable to complain about.
So are you saying that "testing" as a separate organizational structure disappears (as opposed to "testing" as a normal, integrated part of design...)
hmm....It's not testing, exactly; it's "troubleshooting" ---- ??
(Sorry to be thick!)
DI's best attribute is the attention to continuous 'sampling' and intrinsic prevention of defect passing. I wonder sometimes if this isn't its real strength and would work for any pedagogy.
I would say 'no' in that DI vastly increases the number of answers kids can give in a class through the use of call and response. (I don't think they use the phrase "call and response.")
Also, they spend YEARS figuring out exactly where the "joints" in the material are -- which is no small task. (Having two autistic kids, I'm supposed to be doing on-the-fly "task analysis" all the time. VERY hard.)
What I wonder is whether the continuous sampling and intrinsic prevention of defect passing would tend to turn other pedagogies into DI (or precision teaching)....
Here are some more thoughts on testing, which are meant to dovetail with the prior Anon's. They are perhaps a different way of saying the same thing, so maybe it'll provide another insight.
In the software world, Testing, or Quality Assurance, or Quality Control, is often thought of just as the Anon said--it's the After Production step. In far too many software companies, there are entire QA teams dedicated to "Testing" the code AFTER it's been written, after it's been implemented. They are to find the "bugs".
Competent software companies recognize that having a separate QA department IS THE WRONG solution. The right solution is to ENGINEER the QA in from the beginning--having a separate QA department may be a temporarily necessary evil, but fundamentally, the engineering staff should make their quality control part of their design process.
This is another example of trying to move to prevention. Just as in hardware, you try to design a system so that no part can be put in upside down, in software, you move to prevention by including as an input into your design the means necessary for finding defects before you've coded them.
One big step in moving to prevention means making a decision when you START your design, that one of the inputs to your design will be "be able to test for defects."
Here's an example: you need a piece of software to analyze images of luggage and detect contraband in luggage. Before you start writing code, you ask yourself "how will I know if my system is working?" You start writing down the tests you'd need to make. Then you engineer the project from the beginning to make it EASY to perform those tests: you design the data sets to be easily computable or sortable according to your needs, you design the software logic to work on "fake" data that you know the ground truth of (and you see how well it works against that), etc. You do this right down to the nuts and bolts, even in software. Just as you can make a design that prevents someone from inserting a part backward, you can make a software design that prevents someone from reversing the input parameters (so they can't mistakenly put in y,x when they meant x,y.)
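A sketch of the ground-truth idea, with a stand-in detector and synthetic "images" invented for illustration:

```python
# Synthesize inputs whose right answer is known by construction, then score
# the system under test against them. The detector is a stand-in, not real.
import random

def make_fake_bag(has_contraband: bool):
    pixels = [random.gauss(50, 10) for _ in range(100)]
    if has_contraband:
        pixels[40:45] = [200.0] * 5      # planted high-intensity blob
    return pixels

def detect_contraband(pixels):           # system under test (stand-in)
    return max(pixels) > 150

trials = [(make_fake_bag(t), t) for t in [True, False] * 50]
correct = sum(detect_contraband(img) == truth for img, truth in trials)
print(f"accuracy on known ground truth: {correct / len(trials):.0%}")
```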
At the nuts and bolts level, good software engineers don't write a function without first writing down the test that shows the function is working properly. So if you're writing code to compute f(x) = sin(x), the sine of a number, you first write a test that f(x) only produces values over the range -1 to 1. You write a test that on input x = 0, f(x) is really 0. If you write enough of those tests, then you've got the equivalent of the "the piece wasn't in upside down" test--and then, AFTER all of those tests are written, you actually write the function--and IMMEDIATELY test it against the tests.
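Spelled out, that test-first pattern might look like this minimal sketch:

```python
# The tests exist before the function does; the function is then written
# and run against them immediately.
import math

def test_f():
    for x in [-10.0, -1.0, 0.0, 2.5, 100.0]:
        assert -1.0 <= f(x) <= 1.0, "output must stay within [-1, 1]"
    assert f(0.0) == 0.0, "f(0) must be exactly 0"

# Only now is the function itself written:
def f(x):
    return math.sin(x)

test_f()
print("all tests pass")
```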
Anon noted how DI does testing, but didn't talk much about prevention in DI.
But DI does understand the prevention of errors. Nearly ALL of Engelmann's actual scripts have thousands of critical error-prevention details in them. When his theory of instruction says you must make examples so that PRECISELY ONE concept is taught in that set of examples, he's already handpicked every example so that you never show students two red curvy shapes--he's made sure to show the smallest counterexample necessary, etc. All of that is error prevention. If you don't follow the script perfectly, you make problems for your kids, because you've deviated and now the error prevention isn't there.
I think an important lesson here is that, in teaching, EVERY SENTENCE said to a child is a liability, not an asset.
It's a risk. It's an opportunity for failure. Textbooks, lecture notes, and lesson plans are all necessary, but make no mistake, every sentence is a risk, in that every sentence is an opportunity to mislead and misteach a student.
If we stopped thinking of our teaching of subject material as a huge cache of our ASSETS that we are giving to or passing on to others, and started recognizing it as a LIABILITY, where each sentence could undermine the clarity in our students' minds, I think we'd be better off. Instead, we're so sure that we're imparting KNOWLEDGE to them--handing them some asset, just like a brick of gold--that we don't take care to see the risks.
Awesome insights! I remember an Air Force methodology for specifying software systems. In big (monstrous) projects with development spread out over big geographies and many contractors it was very effective.
What they (the designers) did was write a small functional specification; then they wrote a piece of code that was going to be used to test what was to be developed by contractors. The contractor got the spec and the test code, nothing else.
When the contractor delivered, their stuff always worked. If it didn't work, the test was faulty.
I want to write the test code for my next class before the class starts. Each standard turns into several very focused questions. These would give you a tool set to use all year for troubleshooting.
No Defect Left Untouched.
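A hypothetical sketch of what that tool set might look like; the standard codes and questions are invented:

```python
# Each standard maps to a few keyed diagnostic questions, reusable all year.
standards = {
    "NS.5.1": [
        ("What is 2 divided by 3?", "2/3"),
        ("Can you divide a smaller number by a bigger one?", "yes"),
    ],
    "NS.5.2": [
        ("What is 0.4 + 0.35?", "0.75"),
    ],
}

def troubleshoot(standard, answers):
    """Return the questions a student missed, or 'mastered'."""
    items = standards[standard]
    misses = [q for (q, key), a in zip(items, answers) if a != key]
    return misses or "mastered"

print(troubleshoot("NS.5.1", ["2/3", "no"]))   # flags the misconception
```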
Competent software companies recognize that having a separate QA department IS THE WRONG solution. The right solution is to ENGINEER the QA in from the beginning--having a separate QA department may be a temporarily necessary evil, but fundamentally, the engineering staff should make their quality control part of their design process.
Great!
Thanks -- that's what I was thinking.
When I said that DI does assessment "on the fly," this is what I was thinking of. I'm not as well-versed in DI as I should be, but it seems clear to me that DI embeds extremely simple, quick assessments within classroom practice. DI assessments aren't add-ons or extras or post hocs.
(I hope palisadesk will correct what I've got wrong.)
I remember reading Engelmann once saying it's a simple matter to find out how well kids are doing in a school; you should be able to enter a classroom, give kids a 5-minute assessment, and see whether they know how to do the arithmetic operations, say.
(Again, if palisadesk is around she can vet this comment!)
ASSETS that we are giving to or passing on to others, and started recognizing it as a LIABILITY, where each sentence could undermine the clarity in our students' minds, I think we'd be better off
I think Susan S would probably sign off on this. As I recall, when she first began using Saxon with her LD son, she ad libbed a fair amount of material -- and then, in time, began to see that he did better when she stuck to the text as written.
I've learned a similar lesson over time, though I'm not sure I have a good means of testing a sentence I'm about to utter before I say it.
Nevertheless, I've seen repeatedly that it's very easy to confuse the issue when you're trying to explain or demonstrate something one-on-one.
I think one of the major challenges to public education is structural obsolescence (at least in math). By that I mean there is no way to encapsulate that perfect lesson for all time (speaking figuratively, not literally). Perfect lessons are reinvented every day and every year.
We have a cacophony of competing publishers, pedagogy, curricula, and lesson plans all scurrying around a topic that really hasn't changed in 100 years. Well, maybe logs are new???
In industry you put the little defects to bed. Once! In education we seem to reinvent the wheel every 5 years or so and each iteration is a precise copy of the last.
In education we seem to reinvent the wheel every 5 years or so and each iteration is a precise copy of the last.
lol
But are they precise?
Stone has a great paper tracing the many reinventions....
First, it's TOTALLY false that in industry you put the little defects to bed ONCE. You only think this because you didn't work in an industry where you saw the defects, errors, bugs, etc. 99% of the bugs found are found because they were found before, and people knew where to look. People reinvent wheels in every single industry, every day. There are dozens of "systems" to help companies make error-free products--Six Sigma, CMMI, and "re-engineering," to name a few. All they did was produce yet another system by which to quantify yet another process that's broken, and still no one has figured out how to actually make the monkeys do it right, because the humans don't like being monkeys.
What's better in industry is that eventually (and sometimes this is LONG LONG after they should have), companies fail because of those defects, while schools never do.
But I think the idea that there is a "perfect lesson" is wrong--again, EVERY LESSON is a liability, not an asset. Each time you open your mouth, open the book, or draw on the board, you risk losing a student to confusion. What worked for some students WON'T help others. Precision teaching and DI still work better with trained teachers, who can work together to recognize when they've misled, confused, etc. their students.