I've said in the past that many teachers think that the problem of education is defined by what walks into their classroom. As kids get older, it's very easy to blame the kids and external causes.
"As Engelmann suggests, let's save the excuse making until we clean up our instructional act."
Unfortunately, the cleanup usually amounts to trying to bring more kids up over very low cut-off levels. I'll call this the guess-and-check approach to educational improvement.
This reminds me of students trying to fix a computer program that has many different internal errors. The errors interact and produce all sorts of odd results. There are no clear cause and effect relationships. Invariably, students try to fix the program by changing something and looking at the results. The program might be fixed in one area, but the fix might cause a problem in another location. This happens because they don't take the time to really understand what is going on in the code line by line. Things change, but they don't really know why.
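To make the analogy concrete, here is a tiny, made-up sketch (mine, not anyone's real code) of two interacting bugs. For some inputs the output looks right, so a guess-and-check fix to either bug alone makes other results worse, and the fixer reverts it without ever reading the code.

    def average(scores):
        total = 0
        for s in scores[1:]:                # bug 1: skips the first score
            total += s
        return total / (len(scores) - 1)    # bug 2: divides by n - 1

    print(average([80, 80, 80]))     # prints 80.0 (looks right; the two bugs cancel)
    print(average([100, 70, 70]))    # prints 70.0 (wrong; the true average is 80)

Change the divisor to len(scores) without fixing the loop and the first result turns into 53.3, so guess-and-check says "revert." Only a line-by-line reading finds both errors.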
A lot of research abhors the individual anecdote. It's almost a dirty word. However, a detailed analysis of individual cases is how you really understand what's going on. You see a direct connection between cause and effect. Fixing one such problem won't fix all of your problems, but it's a necessary step in the process.
Statistics hides problems. It's a process of reducing large amounts of data (many errors) into a more manageable amount. In doing so, information (problems) can be lost or confused. You might think you know what's going on, but you don't. If you want to understand why some kids are successful and some are not, you have to analyze a lot of individual cases. You aren't looking for one error and one solution. You're looking for many.
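A made-up example of what I mean (hypothetical scores):

    scores = [95, 92, 90, 88, 45, 42, 40, 88, 91, 89]   # hypothetical class

    print(sum(scores) / len(scores))              # prints 76.0, a "respectable" average
    print(sorted(s for s in scores if s < 60))    # prints [40, 42, 45]: three kids are lost

The 76 tells you next to nothing; the three individual cases tell you exactly where to look.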
Why do so many educators try to find the "one thing", like better teacher preparation, that will solve the problem?
Guess and check.
..............................
Statistics hides problems.
Ed calls this death by data.
I have a question about the analogy---
Is there an equivalent to a "case study" in trying to fix software?
(Is that question too difficult?)
Too difficult meaning too detailed
ReplyDelete"Is there an equivalent to a "case study" in trying to fix software?"
It's late. I'll try to comment tomorrow. It's all case study, in effect. You have to pick off the errors one by one. No guess and check allowed.
right - I'm confused between "guess and check" and "picking off the errors one by one."
Either way (given that I don't know anything about writing software) it seems to me you'd get the problem of a fix for one problem leading to other new problems.
btw, this happens with writing.
What no one ever quite realizes & doesn't exactly teach is that writing isn't words, exactly; it's structure.
William Goldman's famous line about screenplays sums it up: "Structure, structure, structure."
I'm in a constant war with structure...and when I move one line or paragraph I've just created 5 new problems.
This is why I enjoy writing short posts for a blog, by the way.
This is also why I feel nothing but scorn and contempt, not to put too fine a point on it, for PowerPoint assignments in lieu of CAREFULLY TAUGHT writing assignments.
Writing short posts, which would include writing one slide's worth of bullet points, is the easy part.
There are two parts to creating working computer programs (complex systems). First, you have to try to get it right the first time, and second, you have to know how to fix errors. If you don't do enough analysis (thinking and planning) beforehand, the number of errors will go up, and the time required to fix them later will go up exponentially. You might even discover (way too late) that the original design has a major flaw.
There is, however, a point of diminishing returns for the analysis and design stage because many times you just don't know enough until after you do the project. Generally speaking, if you've done a particular type of project before, then more prior analysis and design (a top-down approach) is useful. If it's a new type of project, with lots of unknowns and risk, then a jump-right-in (bottom-up) prototype approach works best. I'm a big fan of the prototype approach because most projects have lots of unknowns.
With a prototype approach, you start with a very small, but correct, system. Each new modification is a small change, and the number of potential errors (and the time to fix them) is minimized. You constantly maintain a correct system as you add in more functions. You have to do some prior analysis to know where you are going, but for many projects, you have to learn a lot along the way. The downside to a prototype approach is that you might run into a roadblock that requires a major restructuring of the system. From my experience, however, there is always some big surprise even if you spend a long time on the analysis and design phases. (I actually like to use something I call an Outside-In approach.)
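A hypothetical sketch of what I mean: the system starts tiny but correct, and every addition is small enough to verify before you move on.

    def grade(score):
        # version 1: the smallest rule that is actually correct
        return "pass" if score >= 60 else "fail"

    assert grade(75) == "pass"
    assert grade(40) == "fail"

    # version 2 would add one small, tested feature (say, an "incomplete"
    # category), never a large untested rewrite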
This all falls into the area of systems analysis. It's difficult to really talk about fixing errors unless you say something about this topic. I've taught courses in systems analysis, but I'm not a fan of overly rigid software development schemes. CASE (Computer-Aided Software Engineering) was really big for a while, but in software, as in education, pedagogy will NEVER trump basic skills, content knowledge, flexibility, and hard work. For large, complex projects, give me just a few really good programmers who know what they are doing.
However, given an existing complex system, how do you fix errors when there are a lot of them? One technique is divide-and-conquer. You divide a large, complex system into separate pieces with well-defined and controlled connections. You test and fix the pieces separately and then join them back together. For software, this is often called unit testing and integration testing.
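A minimal sketch of the idea, with hypothetical functions:

    def percent_correct(right, total):
        return 100.0 * right / total

    def letter_grade(pct):
        return "A" if pct >= 90 else "B" if pct >= 80 else "C" if pct >= 70 else "F"

    # unit tests: each piece checked on its own
    assert percent_correct(18, 20) == 90.0
    assert letter_grade(90.0) == "A"

    # integration test: the pieces checked across their connection
    assert letter_grade(percent_correct(18, 20)) == "A"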
For education, one could divide it into grades K-5, 6-8, high school, and college. This kind of analysis might tell you that many problems in high school are really caused in the lower grades. Duh! But how many times have educators talked about fixing high schools without saying one word about the lower grades?
Our public school used to use CMP for middle school math even though there was an obvious content and skills gap going to the high school honors track. This is really obvious, but middle schools need to examine very carefully how the outputs of their system (students) fit into the prerequisites of the next system (high school). High schools need to do the same with colleges and vocational schools.
Guess and check will not fix specific errors in a system. I'll give you an example. A teacher and parent committee at our school analyzes state testing results to find solutions. They see a problem (like a lower number, compared to last year, in the math problem-solving section of the test) and they talk about how to fix it. They talk and talk and talk, but they really don't know what the error is, or even if it is an error. If there is one thing I've learned over 35 years of programming, it's that you have to carefully define the error and work backwards to see where it came from. What is the formula? What other numbers are used to calculate the suspect number? Where did those numbers come from?
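In code terms (made-up data), the process looks like this: start at the reported figure and follow each input one level back toward its source.

    subscores = {"Ann": 3, "Bob": 4, "Cal": 0, "Dee": 4}     # made-up data
    print(sum(subscores.values()) / len(subscores))           # prints 2.75, the suspect number
    # the formula is an average, so the next step back is its inputs:
    print(sorted(subscores.items(), key=lambda kv: kv[1]))    # Cal's 0 is the source
    # and the step after that is Cal's actual exam and the actual questions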
You have to work backwards to the actual test and the actual test questions for each individual. Follow the bad number backwards to find the source. I've tried to do this with our state test results and it can't be done. There are gaps. When I do get back to the exam, I see something like this question (sixth grade!):
Mr. Mason owes $21.28 for his groceries. He pays with a twenty-dollar bill and a five-dollar bill. What is the correct amount of change Mr. Mason will receive?

A. $3.72
B. $3.82
C. $4.28
D. $4.72
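(The answer, for the record: $20 + $5 = $25.00, and $25.00 - $21.28 = $3.72, which is choice A.)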
It's not clear how they filter "problem solving" skills out of this problem, but they do. Even that is not enough. You have to see the test numbers for individual students. You have to look at how the formulas worked for each student. You can't fix problems by looking at a statistically reduced-down number that just doesn't look right.
Individual "case studies" will give you a good idea of why bad numbers happen, but there is another issue. Assumptions. Analysis of the details (formulas in this case) can help you calibrate how bad the or good the numbers are. For example, a school committee might trumpet that numbers are going up, but the numbers don't mean what many people think. Our schools have Proficiency Index Scores of 92% in ELA and 88% in math. They are increasing slightly. Sounds good until you work backwards and look at the equations (if you can find them). These numbers really talk only about what percentage of kids (in our affluent town) get over a low cut-off on state tests with questions like the one above. The scores give our school a "High Performing and Commended" rating, but most parents don't ever track backwards to see that this isn't saying much. Parents think that their kids are being properly prepared for college, but it says no such thing.
You have to look at the details to fix errors. You have to work backwards from the bad number. A statistic might give you a warning, but that's just the start: the goal is to fix errors, not to improve the number. This is a variation of what I've been pushing all along: individuals matter, not statistics. I've said this elsewhere: a slowly rising tide will float many boats, but no one will ever learn to fly (unless you are affluent enough to get lessons).
Of course, even if you don't see a bad number, problems will exist. A more rigorous approach to software testing is to study the code line by line. Analyze each if-then-else branch and design a test that exercises it. This is called complete path testing, and it's often the basis for putting together a detailed test plan.
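A small sketch of what that looks like (hypothetical function): list the branches, then write one test that forces execution down each path.

    def change_category(paid, owed):
        diff = paid - owed
        if diff < 0:
            return "underpaid"
        elif diff == 0:
            return "exact"
        else:
            return "change due"

    # one test per path through the if-then-else
    assert change_category(20.00, 21.28) == "underpaid"
    assert change_category(21.28, 21.28) == "exact"
    assert change_category(25.00, 21.28) == "change due"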
You know, education would probably be a whole lot better if educators didn't take it so personally.