
Sunday, October 5, 2008

Here we go again

Stuff like this really honks me off, and this is even worse (Darren, you owe me blood pressure medication).
Teachers at Soquel High School have agreed not to wear "Educators for Obama" buttons in the classroom after a parent complained that educators were attempting to politically influence his daughter and other students.
These teachers must not have much to do in the classroom if they have all this time to waste on topics that have nothing to do with the curriculum. But I promised myself I wouldn't rant, so I won't. Instead, I'll offer an alternative for those who just can't keep from bringing the election into the classroom -- an alternative that does not push a candidate or a party, and actually has something to do with learning the class material -- and with critical thinking in the literal sense, not the "think like a slobbering leftist" education-school sense. Wow, how about that!

Student interest is a great motivator, particularly when you teach something many students find boring, or even intimidating, like I did. One thing that worked very well was building several applications with the tools we were going to cover -- applications that grabbed student interest when we said, "At the end of the semester, you'll be able to do this, too."

One of these was a simulation model that, based on the scores of all the games that season, predicted the winner of the Super Bowl (we had one for the NBA Finals and another for the World Series, depending on which semester we were in).
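If you're curious what that looks like in practice, here's a minimal sketch of the idea in Python. The team names and scores are made up, and the scoring model -- each team's score drawn from a normal curve centered between its season offense and the opponent's season defense -- is just one plausible choice, not a reconstruction of what we actually built:

```python
import random

# Hypothetical season results: (points scored, points allowed) per game.
season = {
    "Steelers":  [(27, 14), (21, 24), (31, 17), (24, 10)],
    "Cardinals": [(24, 20), (17, 28), (30, 21), (28, 27)],
}

def rating(team):
    """Average points scored and allowed over the season."""
    scored = [s for s, _ in season[team]]
    allowed = [a for _, a in season[team]]
    return sum(scored) / len(scored), sum(allowed) / len(allowed)

def simulate_game(team_a, team_b):
    """One simulated game: each score is drawn from a normal curve
    centered between that team's offense and the opponent's defense."""
    a_off, a_def = rating(team_a)
    b_off, b_def = rating(team_b)
    a_score = random.gauss((a_off + b_def) / 2, 7)
    b_score = random.gauss((b_off + a_def) / 2, 7)
    return a_score > b_score

def win_probability(team_a, team_b, trials=10_000):
    """Fraction of simulated games won by team_a."""
    return sum(simulate_game(team_a, team_b) for _ in range(trials)) / trials

print(f"P(Steelers win) ~ {win_probability('Steelers', 'Cardinals'):.2f}")
```

Arguing over the standard deviation, home-field effects, and injuries is where the real learning happens.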

So if you absolutely must address the election in class, here is one way to do it that will have the students actually learn something, with not a hint of advocacy or indoctrination.

Have students build an application that predicts the results of the election. Remind them that the more variables they incorporate, the more accurate it will likely be, and encourage them to make it as complex as they like.

You'd want to break them into teams to do this, and give them time to talk about what variables they would want to incorporate, and how. You should probably give them a list of sources for data, like realclearpolitics.com, gallup.com, and rasmussenreports.com. In fact, give them a whole class period to do nothing but plan their model, figure out where they'd get the data, and assign people in the team to do various tasks.

I'd give them a week to turn in the models. After going through them, you can pull up several with different results and, as a class, pick apart the applications and discuss why they got different results (this is what is known as a learning experience). You can then, again as a class, discuss which of the models is most likely to accurately predict the results, and why. You can even give bonus points to the team whose model most accurately predicts the election.

See? You addressed the election, and you didn't have them sing creepy Hitler Youth songs.

If you think about it, these models incorporate a lot of mathematical knowledge in many different areas, and all through the model. Take collecting the data -- say, polls. How are they going to deal with the different levels of statistical error in different polls? How will they deal with different party weightings in different polls? What, other than polls, will they use as input variables, and how will they incorporate them into the model? For example, if they're going to look at the number of voters who went for Hillary in the primaries and turn that into support for McCain, how, exactly, are they going to do it? What algorithm will they use, and what will they base it on? And would they also want another variable -- say, Democratic respondents who only lean Democrat or are undecided -- to calculate their Hillary-conversion variable?

And what about actual election-day statistics -- will they use those? If so, which variables, and how will they incorporate them?
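Here's one hedged sketch of how a team might start on the poll questions above. Weighting each poll by the inverse square of its margin of error is one defensible first pass (not the only one), and the Hillary-conversion numbers are pure placeholders that a team would have to justify with data:

```python
# Hypothetical national polls: candidate shares (%) and margin of error.
polls = [
    {"obama": 49.0, "mccain": 44.0, "moe": 3.0},
    {"obama": 51.0, "mccain": 45.0, "moe": 2.5},
    {"obama": 47.0, "mccain": 46.0, "moe": 4.0},
]

def weighted_share(polls, candidate):
    """Average the polls, weighting each by 1/moe^2 so that polls
    with less statistical error count more (one defensible choice)."""
    weights = [1 / p["moe"] ** 2 for p in polls]
    return sum(w * p[candidate] for w, p in zip(weights, polls)) / sum(weights)

obama = weighted_share(polls, "obama")
mccain = weighted_share(polls, "mccain")

# A crude "Hillary conversion" variable: assume some fraction of her
# primary voters move to McCain. Both numbers below are placeholders
# a team would have to defend with data.
hillary_share = 8.0     # her primary voters as a % of the electorate
defection_rate = 0.15   # fraction of them assumed to switch

mccain += hillary_share * defection_rate
obama -= hillary_share * defection_rate

print(f"Adjusted shares: Obama {obama:.1f}%, McCain {mccain:.1f}%")
```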

You can turn just about anything into a real, learning experience in the classroom if you just think about it. Unfortunately, "thinking" seems to be an alien concept to many teachers these days.

The learning isn't only in creating the models. The learning -- and critical thinking -- is also in analyzing the models and comparing them once they've been done. What makes a good model? What makes this model more accurate than that one? Would this be a more accurate model if we tweaked the algorithms, and if so, how would we tweak them? You get the idea.
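Even the comparison step can be made concrete. Here's a sketch, with invented numbers, of one simple scoring rule: rank the models by how far their electoral vote prediction missed.

```python
# Hypothetical predictions: electoral votes for the eventual winner.
predictions = {"Team A": 300, "Team B": 279, "Team C": 338}
actual = 282  # placeholder for the winner's actual electoral votes

# Rank the class's models by absolute error in electoral votes.
for team, ev in sorted(predictions.items(),
                       key=lambda kv: abs(kv[1] - actual)):
    print(f"{team}: predicted {ev}, missed by {abs(ev - actual)}")
```

Whether absolute electoral vote error is the right yardstick is itself a good class discussion; a state-by-state scoring rule would be more demanding.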

When my students are working in teams, I usually migrate from team to team, playing devil's advocate, and gently nudging them when they're completely off track (I call this guided constructivism). With a project like this, I would probably limit my input to making sure they understood, and correcting fundamental errors, like only taking into consideration the popular vote. Oh. And I would only do something like this after the students had all of the necessary knowledge and skills to actually build a working application. Sorry, but if you think turning students loose on their own to do complex projects like this is a good way to introduce them to new skills, you have no business within a hundred miles of a classroom.
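Incidentally, the popular-vote error is easy to demonstrate with a toy example. The numbers below are invented, but the arithmetic is the lesson -- a candidate can win the popular vote and still lose the electoral college:

```python
# Invented three-state race: (votes for A, votes for B, electoral votes).
states = {
    "X": (600_000, 400_000, 4),    # A wins a small-state blowout...
    "Y": (480_000, 520_000, 10),   # ...but narrowly loses both
    "Z": (490_000, 510_000, 10),   #    big states.
}

a_pop = sum(a for a, b, ev in states.values())            # 1,570,000
b_pop = sum(b for a, b, ev in states.values())            # 1,430,000
a_ev = sum(ev for a, b, ev in states.values() if a > b)   # 4
b_ev = sum(ev for a, b, ev in states.values() if b > a)   # 20

print(f"Popular vote: A {a_pop:,} to B {b_pop:,}  (A leads)")
print(f"Electoral vote: A {a_ev} to B {b_ev}  (B wins)")
```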

(We talked about doing this with one of the sports championships, don't remember which now, but decided against it because making the data usable would require complex Excel text functions we had not covered in class.)

Cross-posted at Right Wing Nation

5 comments:

  1. "Remind them that the more variables they incorporate, the more accurate it will likely be, and encourage them to make it as complex as they like."

    I find this very interesting advice. I work as a forecaster, and while adding more variables will make a model more accurate in terms of backcasting, that's very different from improving your accuracy in terms of forecasting. Can you tell me what experimental results are behind the claim that more variables tend to improve accuracy? And to which sorts of models does it apply?

    Also, how do you measure accuracy in this area, given that there's only one election/Super Bowl/etc.? What statistical test do you use in this situation?

  2. Sorry, it's not an absolute that more variables will result in greater accuracy -- it's true only up to a point (and anything beyond that would be beyond the scope of the class). In the Super Bowl model, for example, you can make the simulation more accurate if you take into account not only the scores of all the season games but also injuries late in the season (because those players are unlikely to play in the bowl).

    And you can't talk about which models are more accurate until the election has passed -- but you can talk about which models are more likely to be accurate.

  3. Hmmm, my own personal inclination when teaching how to forecast is to push the idea of simplicity very early on, and get it fixed into my co-workers' heads. While sometimes complexity is justifiable, any additional variable should be treated with suspicion: "guilty until proved innocent." (I don't apply this attitude to criminal trials, of course.)

    Does taking late-season injuries into account make the simulation more accurate? Does using the scores of all the season games make it more accurate? I think these are empirical questions. And I am well aware that the empirical answers in other areas of forecasting have often turned out to be surprising.

    My other concern is that, as far as I know, you can't say anything reliable about which models are more accurate even *after* the election has passed -- you only have one data point, and that's not enough to estimate the probability that the models that got the result right did so by pure chance. A simulation model that consisted of tossing a coin would have a 50% chance of getting the right answer for a single election. I am very curious as to how you get around this.

  4. Because the simulation doesn't predict merely a win/loss (that would be the 50% case), but electoral votes -- just as the Super Bowl simulation predicts a win/loss in points for both sides.

    The simulation that most accurately predicts the results of the electoral vote is, ipso facto, the most accurate. You could, say, have one that predicted 300 electoral votes for the correct winner, and another that predicted 279. If the winner in the election ends up with 282 votes, then the second would be the more accurate.

  5. "You could, say, have one that predicted 300 electoral votes for the correct winner, and another that predicted 279. If the winner in the election ends up with 282 votes, then the second would be the more accurate."

    The same problem applies: you still only have one data point. Substitute a random number generator for my coin toss if you like. Most weeks someone wins the lotto; that doesn't mean the winner has any better a method of picking numbers than the people who don't win. They may, but as far as I know one data point won't tell you that.
