The article opens with a scene of an actual human being tutoring a fellow species member. While her tutee works on a problem (calculating average driving speed), the tutor provides lots of interactive feedback. Neil Heffernan, the tutor's fiancé, catalogued the types of feedback she gave under categories such as “remind the student of steps they have already completed,” “encourage the student to generalize,” and “challenge a correct answer if the tutor suspects guessing.” According to the article, Heffernan then "incorporated many of these tactics into a computerized tutor," which he spent nearly two decades refining. Now called ASSISTments, it is used by more than 100,000 students "in schools all over the country." The article describes the experience of one of these 100,000 students with the program's interactive feedback:
Tyler breezed through the first part of his homework, but 10 questions in he hit a rough patch. “Write the equation in function form: 3x-y=5,” read the problem on the screen. Tyler worked the problem out in pencil first and then typed “5-3x” into the box. The response was instantaneous: “Sorry, wrong answer.” Tyler’s shoulders slumped. He tried again, his pencil scratching the paper. Another answer — “5/3x” — yielded another error message, but a third try, with “3x-5,” worked better. “Correct!” the computer proclaimed.

In other words, it's the same old binary right-or-wrong feedback that nearly every educational software program has been using for decades. As the article notes:
In contrast to a human tutor, who has a nearly infinite number of potential responses to a student’s difficulties, the program is equipped with only a few. If a solution to a problem is typed incorrectly — say, with an extra space — the computer stubbornly returns the “Sorry, incorrect answer” message, though a human would recognize the answer as right.

True, the program is still a work in progress. But what's being refined, according to the article, isn't the feedback. Rather, it's the program's ability to detect when a student is getting bored, frustrated, or confused (via facial expression reading software, the speed and accuracy of responses, and special chairs with posture sensors "to tell whether students are leaning forward with interest or lolling back in boredom"):
Once the student’s feelings are identified, the thinking goes, the computerized tutor could adjust accordingly — giving the bored student more challenging questions or reviewing fundamentals with the student who is confused.

Or "flashing messages of encouragement... or... calling up motivational videos recorded by the students’ teachers."
Also being refined is the "hint" feature, which users click on when stumped. Human beings (particularly teachers) track common wrong answers and have other human beings (particularly students) come up with helpful hints. These hints are then incorporated into the next generation of ASSISTments.
Cognitive Tutor, a more established software program that is "used by 600,000 students in 3,000 school districts around the country," also limits its feedback to hints and right-or-wrong responses. And it, too, is being refined based on data from human users:
Every keystroke a student makes — every hesitation, every hint requested, every wrong answer — can be analyzed for clues to how the mind learns.

Ultimately, this data will be put to use not to refine feedback on particular student responses, but to help decide how to space out material and schedule periodic reviews.
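Deciding how to "space out material and schedule periodic reviews" is a scheduling problem with simple, well-known baselines. Purely as an illustration (this is not Cognitive Tutor's actual algorithm), here is a Leitner-style sketch: each correct answer promotes an item to a box with a longer review interval, and each miss sends it back for prompt review.

```python
# Hypothetical Leitner-box scheduler -- an illustration, not Cognitive
# Tutor's method. Box number maps to days until the next review.
REVIEW_DAYS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

def update_box(box: int, answered_correctly: bool) -> int:
    # A correct answer promotes the item one box (longer interval);
    # a miss demotes it all the way back to box 1.
    return min(box + 1, max(REVIEW_DAYS)) if answered_correctly else 1

box = 1
for right in (True, True, False, True):
    box = update_box(box, right)
    print(f"answered {'right' if right else 'wrong'}: "
          f"next review in {REVIEW_DAYS[box]} day(s)")
```

Real systems weight in the hesitations and hint requests the article mentions, not just right-or-wrong outcomes, but the underlying promote-or-demote logic is the same shape.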
But it's carefully tailored feedback on particular responses by particular students that makes human tutoring--the inspiration for all these programs--as powerful as it is.
In my earlier post on Cognitive Tutor, I wrote that programming sufficiently perspicuous feedback for mathematical problems "strikes me as even more prohibitive" than the feedback I labored for years to provide in my GrammarTrainer program. Last night I ran this impression past a mathematician friend of mine who cares a lot about effective math instruction. She emphatically concurs.
When it comes to educational software developers--as opposed to educational software users--there is somewhat perspicuous feedback on whether their answers (their answers to students' educational needs) are on track. As I wrote earlier, that feedback isn't particularly encouraging.
(Cross-posted at Out In Left Field).
8 comments:
"If a solution to a problem is typed incorrectly — say, with an extra space..."
*THIS* is easily fixable. If the program doesn't know how to handle white space, the folks in charge or the programmers need to be hit with a clue-by-four.
Lots of other fairly basic things (e.g. scoring 6/20 as wrong instead of correct, but in need of reduction) also have no excuse.
What is tougher is knowing what feedback to provide when the kids make various flavors of mistakes.
So "5-3x" is not the same as "3x-5", so the answer *IS* wrong, but the student needs more than just "this is wrong." What the correct feedback is for this sort of mistake is going to take a lot of work.
-Mark Roulo
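The easy fixes Mark describes above--whitespace tolerance, equivalent-but-unreduced answers like 6/20, and a targeted hint for a sign flip like "5-3x" vs. "3x-5"--can be roughed out in a few lines. This is a hypothetical sketch, not ASSISTments' actual checking code: it tests equivalence by evaluating both expressions at a few sample values of x rather than by real symbolic algebra, and its notation handling is deliberately toy-sized.

```python
import re
from fractions import Fraction

def to_python(expr: str) -> str:
    """Strip whitespace and make multiplication explicit: '3 x - 5' -> '3*x-5'.
    (A toy normalizer; a real parser would handle far more notation.)"""
    expr = "".join(expr.split())
    return re.sub(r"(\d)([a-zA-Z])", r"\1*\2", expr)

SAMPLES = [Fraction(n) for n in (0, 1, 2, -3)]  # evaluation points for x

def values(expr: str):
    # eval is fine for a sketch; never use it on untrusted input in production.
    return [eval(to_python(expr), {"x": x}) for x in SAMPLES]

def feedback(submitted: str, correct: str) -> str:
    try:
        sub = values(submitted)
    except Exception:
        return "I couldn't read that -- check your notation."
    cor = values(correct)
    if sub == cor:
        # Same value at every sample point: accept, but flag unreduced forms.
        if to_python(submitted) == to_python(correct):
            return "Correct!"
        return "Correct! (Your answer is equivalent, but could be simplified.)"
    if sub == [-v for v in cor]:
        # The submission is the negative of the right answer: a sign slip.
        return "Close -- you have the signs flipped. Check each term's sign."
    return "Sorry, wrong answer."
```

With this sketch, "3x - 5" (extra space) is accepted against "3x-5", "6/20" against "3/10" comes back correct-but-unreduced, and Tyler's "5-3x" draws a sign-flip hint instead of a bare "Sorry, wrong answer." Cataloguing which *other* wrong answers deserve which hints is exactly the part that, as Mark says, takes a lot of work.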
I have not been able to bring myself to read the article yet...
Love this:
"The article opens with a scene of an actual human being tutoring a fellow species member."
I have to post Kent Johnson's comment about educational software. Kent said they spent millions on Headsprout. He's quite skeptical that "Common Core" curricula can be put into software programs.
f(y) = (y + 5)/3 ;)
Hi Catherine,
This is Annie Murphy Paul, the author of the article on computerized tutors in the New York Times Magazine. I appreciate your thoughtful comments on the piece, though I'm sorry that you found it to be a "breathless account of the wonders of computerized learning." Actually I am quite skeptical of the indiscriminate enthusiasm that often surrounds educational technology, and tried to convey that in the article. For example, in this paragraph from the piece: "For all his ambition, [Neil Heffernan, the creator of the computerized tutoring system that is the focus of the article] acknowledges that this technology has limits. He has a motto: “Let computers do what computers are good at, and people do what people are good at.” Computers excel in following a precise plan of instruction. A computer never gets impatient or annoyed. But it never gets excited or enthusiastic either. Nor can a computer guide a student through an open-ended exploration of literature or history. It’s no accident that ASSISTments and other computerized tutoring systems have focused primarily on math, a subject suited to computers’ binary language. While a computer can emulate, and in some ways exceed, the abilities of a human teacher, it will not replace her. Rather, it’s the emerging hybrid of human and computer instruction — not either one alone — that may well transform education."
I respectfully suggest that my article was a more balanced account of computerized instruction than the way you've characterized it here.
All best,
Annie
Dear Annie,
Thanks for writing. You're right that your article was much more balanced than my words "breathless account" suggest. It would have been more accurate of me to describe the headline ("The Machines are Taking Over: advances in computerized tutoring are testing the faith that human contact makes for better learning") as breathless, rather than the article itself.
I appreciate that your article focused on many different aspects of online learning. My focus here is specifically on the issue of feedback, which I see as both one of the weakest features of computerized instruction (even for math!!), and one of the most important aspects of instruction in general.
Hi Annie!
Thanks for commenting -- !
(I STILL have not read your article - I'm terribly behind.)
Speaking of technology, I spent the better part of an hour the other night trying to post links to all of Grace's X-1-2-3 posts on Blackboard.
Two of the links simply can't be posted. Blackboard inserts junk code & there's no html window, so I can't edit it out.