By John Timmer | Published: March 23, 2006 – 10:57AM CT
The results of the evaluation were very positive from a Wikipedia perspective:
However, an expert-led investigation carried out by Nature — the first to use peer review to compare Wikipedia and Britannica’s coverage of science — suggests that such high-profile examples are the exception rather than the rule. The exercise revealed numerous errors in both encyclopedias, but among 42 entries tested, the difference in accuracy was not particularly great: the average science entry in Wikipedia contained around four inaccuracies; Britannica’s, about three.
Based on Nature’s description (in Word format) of how the evaluation was performed, the process seems largely above board. Some bias may have crept in when Nature’s news staff weighed the reviews from the perspective of a “typical encyclopedia user,” but the evaluations themselves are included, allowing readers to judge for themselves how big a problem that is. Overall, everything appears reasonable.
But Encyclopedia Britannica has gone through those evaluations and, based on its own analysis, is now suggesting that such appearances are deceiving. It makes that suggestion in language that, for the generally sedate publishing world, is rather sharp:
Nature’s research was invalid. As we demonstrate below, almost everything about the journal’s investigation, from the criteria for identifying inaccuracies to the discrepancy between the article text and its headline, was wrong and misleading. Dozens of inaccuracies attributed to the Britannica were not inaccuracies at all, and a number of the articles Nature examined were not even in the Encyclopedia Britannica. The study was so poorly carried out and its findings so error-laden that it was completely without merit.
Britannica’s accusations include that Nature used articles from publications other than the encyclopedia itself and, in one case, material that Britannica didn’t produce at all. Nature claimed it matched the size of entries by deleting only the references, but Britannica’s response indicates that many of the submissions were fragmentary excerpts or had been extensively edited. Nature’s chosen evaluators also come in for criticism, both for getting facts wrong in their evaluations and for failing to recognize simplifications that are reasonable in a publication aimed at a general audience.
Britannica bases all of this criticism on the excerpts from the reviews provided in the Word document linked above, but it is calling on Nature to release all of the material involved in the article for public evaluation. That might clarify some issues, but in the end, much of the finger-pointing comes down to a “he said/she said” dispute over how seriously to take a given error; there is no truly objective way to determine whether an editorial decision represents an appropriate simplification or a glaring omission. Nature’s response, however, could prove interesting for reasons that go well beyond this controversy. In a very real sense, Nature and Britannica are kindred publications, both facing increased pressure to maintain subscription-based publication in the face of open-access alternatives. How hard Nature presses an issue seen as an embarrassment for a potential ally may be very telling.