To be fair, the Khan Academy (1) hasn't been around that long and (2) is the creation of a former hedge fund manager with degrees in math, computer science, and engineering but not in, say, cognitive science and child development. As such, the Khan Academy hasn't profited from the "over 20 years of research into how students think and learn" that underpins more established educational software programs like Carnegie Mellon's Cognitive Tutor.
So it was a bit disconcerting to find, fast on the heels of Edweek's Khan Academy article, a front-page article in Sunday's New York Times on Cognitive Tutor, and how it, too, has turned out to have no statistically significant impact on test scores. While I'd never had a chance to try it out (unlike the Khan Academy, Cognitive Tutor gates access to demos and charges big bucks instead of nothing at all), I'd heard only good things about it, and J enjoyed soaring through its algebra lessons during middle school. But as soon as I read the Times' description of its pedagogy, its limitations became crystal clear:
When the screen says: “You are saving to buy a bicycle. You have $10, and each day you are able to save $2,” the student must convert the word problem into an algebraic expression. If he is stumped, he can click on the “Hint” button.

“Define a variable for the time from now,” the software advises. Still stumped? Click “Next Hint.”

“Use x to represent the time from now.” Aha. The student types “2x+10.”

A math buff would soar right through this; for anyone else, the hints seem way too much of a crutch. There's no mechanism here for ensuring that you're working things out to the best of your ability before resorting to "hint"---i.e., nothing to stop you from clicking "hint" the moment you're not sure what to do. And what if your answer is almost right: say you forgot to include the initial $10, or let x stand for hours rather than days? As far as I can tell (I've now tried it out a bit), you're either right or wrong, and that's it. The program simply isn't sophisticated enough to highlight exactly what needs adjustment. And there's a very simple reason for this. As I discovered in creating a software program that highlights grammatical errors in English phrases and sentences, this kind of perspicuous feedback takes a huge amount of coding (of the sort that you don't find in any other language teaching software program, thank you very much). Programming in the analogous feedback for mathematical expressions and equations strikes me as even more prohibitive.
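For concreteness, the student's expression can be checked numerically; here's a minimal sketch in Python (the function name and sample values are my own illustration, not anything from Cognitive Tutor):

```python
# The Times' word problem: start with $10, save $2 each day.
# After x days, total savings = 2x + 10, the expression the student types.
def savings(x):
    """Total dollars saved x days from now."""
    return 2 * x + 10

assert savings(0) == 10   # day zero: just the initial $10
assert savings(5) == 20   # five days of saving $2 adds $10
```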
On closer inspection, therefore, Cognitive Tutor seems inevitably to foster--in all but the brightest, most motivated students (the ones most able to basically teach themselves)--far too passive a learning environment for lasting learning. Indeed, the only truly active learning environment that I've ever seen in any software program for any academic subject is the one a computer programming language platform provides for--what else?--computer programming. Only here does the feedback--the error messages or the unexpected outputs--precisely reflect what you've done wrong.
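To illustrate the kind of feedback I mean, here's a small hypothetical sketch (my own example, not from any of these programs): the "almost right" answer described above, the one that forgets the initial $10, fails immediately when run, and the mismatch itself shows exactly what's missing.

```python
# A hypothetical "almost right" answer to the bicycle-savings problem:
# the student writes 2x, forgetting the initial $10.
def student_answer(x):
    return 2 * x  # missing the "+ 10" term

def correct_answer(x):
    return 2 * x + 10

# Checking at x = 5 days pinpoints the error precisely:
expected = correct_answer(5)
actual = student_answer(5)
print(f"expected {expected}, got {actual}, off by {expected - actual}")
# prints "expected 20, got 10, off by 10"
```

Unlike a bare right/wrong verdict, the discrepancy here is exactly the term the student dropped.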
Will these recent exposés about the limitations of educational technology for subjects other than computer science have any effect whatsoever on the edtech bandwagon?
We might as well ask whether recent cognitive science findings have had any effect on how schools teach "higher level thinking." Or whether mainstreaming kids on the autistic spectrum has had any effect on mandatory group work and personal reflections. Or whether parental concerns have had any effect on schools choosing Reform Math. Or whether, for that matter, the Pope is Jewish.
(Cross-posted at Out In Left Field).