Over at The Collapsed Wavefunction, Chad Jones is talking about a paper in the Journal of Chemical Information and Modeling (admittedly not one that's on my regular reading list...) which includes a straight-faced endorsement of traditional Chinese medicine. His discussion of the paper and issues associated with it is good and definitely worth a read if you're into bad science.
The crux of the problem is one of authority. Peer review is often peddled as a gold standard of authority when debating purveyors of pseudoscience. For example, any debate around creationism (or 'intelligent design') inevitably involves defenders of biology pointing out that few, if any, peer-reviewed papers arguing for design have been published. Conversely, proponents of creationism wave a handful of peer-reviewed papers around as if they refute the much vaster literature of evolutionary biology.
Peer review conveys authority. I suspect that Chad is, in part, frustrated that the hard-earned authority of his field has been lent to something as flaky as TCM. A proponent of TCM could quite plausibly cite this paper as an example of Science™ taking their woo seriously, which not only lends false credibility to TCM, but undermines the authority of the journal that published it.
Reading Chad's post reminded me of the debates about BlogSyn earlier this year. For those who don't know, BlogSyn is an effort by several chemistry bloggers to assess the reproducibility of methods from the chemical (specifically organic synthesis) literature, and it achieved notoriety after failing to reproduce some results from the justly famous Baran lab. No impropriety on the part of Baran et al. was implied by the authors of BlogSyn; the source of the problem was eventually identified, and they all blogged happily ever after.
Following the initial post, discussion at a number of blogs (particularly In the Pipeline) often focused on the perceived lack of authority or legitimacy of the authors of BlogSyn. A relatively common point was that work of this nature ought to be peer-reviewed to ensure the competence and identity of the critics. The fact that SeeArrOh and colleagues are pseudonymous was a point of much contention; Rich Apodaca offered a nice discussion of this. One counter-argument to demands that they identify themselves is that they are young scientists early in their careers, and openly criticising the work of senior researchers could harm their prospects.
In principle, peer review ought to solve some of these problems. In peer review, critics are anonymous and fear no reprisals from those being criticised, but despite their anonymity they are vouched for by the editor of the journal. This allows both the author being reviewed and those reading the journal to assume the work is sound and hence authoritative. From this perspective, critics of BlogSyn seem to be on to something.
These two cases highlight the strengths and weaknesses of peer review. The inclusion of TCM in a chemistry paper is startling because it's the exception; peer review is generally good at ensuring that published work is logically coherent, informed by the literature, and supported by evidence to back up its claims. On the other hand, BlogSyn highlights that peer review in chemistry routinely fails to assess the key element of science: reproducibility.
There are good reasons that peer review does not assess reproducibility in organic synthesis. Reviewing is already time-consuming, and requiring reviewers to prepare reagents, fiddle with conditions to get reactions working, and so on, would cost them further time and money and delay publication. It seems unlikely that we'll be seeing routine replication in chemistry any time soon.
BlogSyn represents one approach to solving this problem. If pre-publication review is impractical, perhaps open, online, post-publication review of work that has had an impact is a solution.
The question, then, is can we trust BlogSyn? Is it authoritative? How should we assess it relative to a journal?
I suggest that we can trust BlogSyn more than the average peer-reviewed paper, provided we assume honesty on the part of the authors.
While there are exceptions, most peer-reviewed synthesis papers include relatively sparse experimental details due to constraints of space and the need to be easily legible. BlogSyn, on the other hand, takes an open-notebook format in which one can see every TLC plate, every NMR, and multiple repeats of the same reaction by different authors. This allows for a much more direct assessment of the competence of the chemists, the chance to pick up minor mistakes that aren't evident from a written summary, and a direct demonstration of reproducibility. Hence, if we trust the authors, I consider BlogSyn to be a more authoritative account of an experiment than a typical peer-reviewed paper.
Ought we to trust the authors? This brings us back to the problem of pseudonymity. Personally, I know several of the authors of BlogSyn to be people of integrity, and to have persistent identities online. Not every reader of BlogSyn has this knowledge, and they are probably justified in being skeptical. How can BlogSyn achieve a degree of authority which is acceptable to the average chemist, who may not be part of the online community?
No easy or ideal solutions are forthcoming, and it's likely that no solution would satisfy everyone. However, until BlogSyn can gain some legitimacy in the broader community it's hard to see how the project can flourish.
Two solutions spring to mind, and I welcome your criticism and suggestions. The first is to slog it out and establish credibility the hard way: continue to critique work from the literature, build up reputation by engaging with authors and marketing the project, and hopefully gain some acceptance. Alternatively, if an established chemist with a good reputation in the community is willing to verify the identity of, and vouch for the integrity of, the authors of BlogSyn, this may serve to lend legitimacy to the project by association. This third party would be fulfilling a role analogous to that of the journal editor in traditional peer review.
What other ways might BlogSyn gain credibility? Are there practical ways to solve issues of reproducibility pre-publication? Leave your thoughts below...