Since I don't know statistics, all I can do with this Andrew Gelman post is surmise that he's suggesting that our family motto (number 2)* - it's always worse than you think - may be correct.
Robin Hanson asks the following question here:
How does the distribution of truth compare to the distribution of opinion? That is, consider some spectrum of possible answers, like the point difference in a game, or the sea level rise in the next century. On each such spectrum we could get a distribution of (point-estimate) opinions, and in the end a truth. So in each such case we could ask for truth's opinion-rank: what fraction of opinions were less than the truth? For example, if 30% of estimates were below the truth (and 70% above), the opinion-rank of truth was 30%.

If we look at lots of cases in some topic area, we should be able to collect a distribution for truth's opinion-rank, and so answer the interesting question: in this topic area, does the truth tend to be in the middle or the tails of the opinion distribution? That is, if truth usually has an opinion rank between 40% and 60%, then in a sense the middle conformist people are usually right. But if the opinion-rank of truth is usually below 10% or above 90%, then in a sense the extremists are usually right.
My response:
1. As Robin notes, this is ultimately an empirical question which could be answered by collecting a lot of data on forecasts/estimates and true values.
2. However, there is a simple theoretical argument that suggests that truth will be, generally, more extreme than point estimates, that the opinion-rank (as defined above) will have a distribution that is more concentrated at the extremes as compared to a uniform distribution.
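The shrinkage intuition behind point 2 can be seen in a toy simulation (my own illustration, not from either post; the normal prior, the noise level, and the Bayesian forecasters are all assumptions). If each forecaster sees a noisy signal of the truth and reports a posterior mean, the estimates get pulled toward the prior mean, and the truth lands in the tails of the opinion distribution more often than a uniform opinion-rank would predict:

```python
import random

def simulate(n_cases=2000, n_forecasters=200, noise_sd=2.0, seed=0):
    # Each case has a true value drawn from a standard normal prior.
    # Each forecaster observes truth + N(0, noise_sd^2) noise and reports
    # the Bayesian posterior mean, which shrinks the signal toward the
    # prior mean of 0 by the factor 1 / (1 + noise_sd**2).
    rng = random.Random(seed)
    shrink = 1.0 / (1.0 + noise_sd ** 2)
    extreme = 0
    for _ in range(n_cases):
        truth = rng.gauss(0, 1)
        estimates = [shrink * (truth + rng.gauss(0, noise_sd))
                     for _ in range(n_forecasters)]
        # Opinion-rank of truth: fraction of estimates below the truth.
        rank = sum(e < truth for e in estimates) / n_forecasters
        if rank < 0.10 or rank > 0.90:
            extreme += 1
    return extreme / n_cases

# If opinion-ranks were uniform, about 20% would fall below 10% or above
# 90%; with shrunken estimates the fraction is far higher.
print(simulate())
```

With these made-up parameters, roughly half the cases land in the extreme bands, against 20% for a uniform distribution; the effect depends on the noise being larger than the spread of true values.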
I could be wrong.
I probably am.
[pause]
OK, I've just read the Overcoming Bias post to which Gelman is responding.
Where public education is concerned I have the answer.
The truth is always in the long tail.
Always.
_______________
* family motto number 1 being no common sense-y
2 comments:
Bayes' theorem brought statistics into its own by allowing us to calculate conditional probabilities. Students have a hard time with it because it so often contradicts what we think should be. To introduce it, we use the standard epidemiology example: a disease infects 0.1% of the population, an infected patient tests positive 99% of the time, and an uninfected patient tests negative 95% of the time. If a patient tests positive, what is the probability the result is a false positive?
Intuition says the answer should be very low, when in fact about 98.1% of positive results are false positives.
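The 98.1% figure can be checked directly from the numbers in the comment (a sketch of the standard Bayes calculation, not the commenter's own working):

```python
# Parameters from the example: prevalence 0.1%, sensitivity 99%,
# specificity 95%.
prevalence = 0.001
sensitivity = 0.99
specificity = 0.95

# P(test positive) = P(positive | infected) P(infected)
#                  + P(positive | uninfected) P(uninfected)
p_positive = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)

# Bayes: P(uninfected | positive) = P(positive | uninfected) P(uninfected) / P(positive)
p_false_given_positive = (1 - prevalence) * (1 - specificity) / p_positive

print(round(p_false_given_positive, 3))  # prints 0.981
```

The counterintuitive result comes from the tiny prevalence: the 5% of false positives among the healthy 99.9% vastly outnumber the true positives among the infected 0.1%.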
Bayes is applied in all kinds of fields, including artificial intelligence.
yup
That moment was a big one for me.