Inferring and Explaining

T1. It was a fluke that the 928 articles showed no skepticism about the consensus view; the ISI database contained many articles that were skeptical.

T2. Although the sample told us something significant about the ISI database, it was a coincidence that the articles it included showed no skepticism when in fact many peer-reviewed articles not included show plenty of skepticism.

I have already conceded that both of these rivals are logically possible. I want to insist, however, that they are very improbable. Remember Giere's "rule of thumb"? He tells us that for random samples, the margin of error is a direct function of the size of the sample. Samples of five hundred are accurate to about ±5 percent, and samples of two thousand are accurate to about ±2 percent. That means that Professor Oreskes's sample has an accuracy of, conservatively, ±4 percent. For a statistician adopting a 95 percent confidence level, there is only a 5 percent chance that the true population value falls outside of the ±4 percent margin of error. Could it happen? Yes. Is it likely at all? No. (The arithmetic behind these margin-of-error figures is sketched at the end of this section.)

Much more interesting rivals will have to do with the problem of bias, either intentional or, more likely, unintentional. I suspect that some of you have already wondered if there might be a bias in the ISI database. Maybe they only list "green" articles. Again, the following rival explanation is possible:

T3. The ISI database is biased in favor of the consensus view.

A very different sort of bias is possible because of Oreskes's methodology. It is highly unlikely that most of the articles in the sample came right out and said where they stood on the consensus view. Indeed, she tells us that some of the endorsement was implicit. That must mean that her team had to "code," or otherwise interpret, each article's intention and subsequent endorsement or nonendorsement. Perhaps her team was so unconsciously wedded to the consensus view that they misinterpreted many of the articles as endorsing or taking no stand when in fact the authors of those articles intended a rejection of the consensus view. Thus another possible rival explanation focuses on the coding of the articles:

T4. Oreskes, because of her biases, misinterpreted many of the articles as favorable or neutral when in fact the authors were arguing against the consensus view.

A final rival explanation centers on the possible bias of the entire scientific community. One might argue, as some have in defense of "creation science," that there is a kind of professional conspiracy that effectively censors articles that challenge the consensus view (not just of climate change but of any accepted scientific theory) from being published in peer-reviewed journals in the first place. Here, the rival does not really challenge the population of peer-reviewed publications, but rather the implied attitude of endorsement by working scientists.

T5. Respectable scientists arguing against the consensus view cannot get their articles published in peer-reviewed journals.
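As promised, here is a rough check of the margin-of-error figures cited above. It is only a sketch: it assumes the standard 95 percent margin-of-error approximation for a simple random sample, with the worst-case proportion p = 0.5, an assumption that neither Giere's rule of thumb nor Oreskes's paper spells out.

% Back-of-the-envelope check, assuming the standard 95 percent
% margin-of-error formula with worst-case p = 0.5 (an assumption).
\[
  \mathrm{MoE}_{95} \;=\; z\,\sqrt{\frac{p(1-p)}{n}}
  \;\approx\; \frac{0.98}{\sqrt{n}},
  \qquad z = 1.96,\; p = 0.5 .
\]
\[
  n = 500:\ \approx \pm 4.4\%, \qquad
  n = 2000:\ \approx \pm 2.2\%, \qquad
  n = 928:\ \approx \pm 3.2\% .
\]

On that approximation, Giere's ±5 percent and ±2 percent figures are in the right ballpark, and the ±4 percent figure claimed for the 928-article sample is indeed conservative.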
