Open access is not the problem – my take on Science’s peer review “sting”

Michael Eisen, Professor of molecular and cell biology | October 4, 2013

In 2011, after having read several really bad papers in the journal Science, I decided to explore just how slipshod their peer-review process is. I knew that their business depends on publishing “sexy” papers. So I created a manuscript that claimed something extraordinary – that I’d discovered a species of bacteria that uses arsenic in its DNA instead of phosphorus. But I made the science so egregiously bad that no competent peer reviewer would accept it. The approach was deeply flawed – there were poor or absent controls in every figure. I used ludicrously elaborate experiments where simple ones would have done. And I failed to include a simple, obvious experiment that would have definitively shown that arsenic was really in the bacteria’s DNA. I then submitted the paper to Science, punching up the impact the work would have on our understanding of extraterrestrials and the origins of life on Earth in the cover letter. And what do you know? They accepted it!

My sting exposed the seedy underside of “subscription-based” scholarly publishing, where some journals routinely lower their standards – in this case by sending the paper to reviewers they knew would be sympathetic – in order to pump up their impact factor and increase subscription revenue. Maybe there are journals out there who do subscription-based publishing right – but my experience should serve as a warning to people thinking about submitting their work to Science and other journals like it. 

OK – this isn’t exactly what happened. I didn’t actually write the paper. Far more frighteningly, it was a real paper that contained all of the flaws described above that was actually accepted, and ultimately published, by Science.

I am dredging the arsenic DNA story up again, because today’s Science contains a story by reporter John Bohannon describing a “sting” he conducted into the peer review practices of open access journals. He created a deeply flawed paper about molecules from lichens that inhibit the growth of cancer cells, submitted it to 304 open access journals under assumed names, and recorded what happened. Of the 255 journals that rendered decisions, 157 accepted the paper, most with no discernible sign of having actually carried out peer review. (PLOS ONE rejected the paper, and was one of the few to flag its ethical flaws).

The story is an interesting exploration of the ways peer review is, and isn’t, implemented in today’s biomedical publishing industry. Sadly, but predictably, Science spins this as a problem with open access. Here is their press release:

Spoof Paper Reveals the “Wild West” of Open-Access Publishing

A package of news stories related to this special issue of Science includes a detailed description of a sting operation — orchestrated by contributing news correspondent John Bohannon — that exposes the dark side of open-access publishing. Bohannon explains how he created a spoof scientific report, authored by made-up researchers from institutions that don’t actually exist, and submitted it to 304 peer-reviewed, open-access journals around the world. His hoax paper claimed that a particular molecule slowed the growth of cancer cells, and it was riddled with obvious errors and contradictions. Unfortunately, despite the paper’s flaws, more open-access journals accepted it for publication (157) than rejected it (98). In fact, only 36 of the journals solicited responded with substantive comments that recognized the report’s scientific problems. (And, according to Bohannon, 16 of those journals eventually accepted the spoof paper despite their negative reviews.) The article reveals a “Wild West” landscape that’s emerging in academic publishing, where journals and their editorial staffs aren’t necessarily who or what they claim to be. With his sting operation, Bohannon exposes some of the unscrupulous journals that are clearly not based in the countries they claim, though he also identifies some journals that seem to be doing open-access right.

Although it comes as no surprise to anyone who is bombarded every day by solicitations from new “American” journals of such-and-such seeking papers and offering editorial positions to anyone with an email account, the formal exposure of hucksters out there looking to make a quick buck off of scientists’ desires to get their work published is valuable. It is unacceptable that there are publishers – several owned by big players in the subscription publishing world – who claim that they are carrying out peer review, and charging for it, but not doing it.

But it’s nuts to construe this as a problem unique to open access publishing, if for no other reason than the study didn’t do the control of submitting the same paper to subscription-based publishers (UPDATE: The author, Bohannon, emailed to say that, while his original intention was to look at all journals, practical constraints limited him to OA journals, and that Science played no role in this decision). We obviously don’t know what subscription journals would have done with this paper, but there is every reason to believe that a large number of them would also have accepted the paper (it has many features in common with the arsenic DNA paper after all). Like OA journals, a lot of subscription-based journals have businesses based on accepting lots of papers with little regard to their importance or even validity. When Elsevier and other big commercial publishers pitch their “big deal”, the main thing they push is the number of papers they have in their collection. And one look at many of their journals shows that they also will accept almost anything.

None of this will stop anti-open access campaigners (hello Scholarly Kitchen) from spinning this as a repudiation of open access for enabling fraud. But the real story is that a fair number of journals that actually carried out peer review still accepted the paper, and the lesson people should take home from this story is not that open access is bad, but that peer review is a joke. If a nakedly bogus paper is able to get through journals that actually peer reviewed it, think about how many legitimate, but deeply flawed, papers must also get through. Any scientist can quickly point to dozens of papers – including, and perhaps especially, in high-impact journals – that are deeply, deeply flawed; the arsenic DNA story is one of many recent examples. As you probably know, there has been a lot of smoke lately about the “reproducibility” problem in biomedical science, with people finding that a majority of published papers report facts that turn out not to be true. This all adds up to showing that peer review simply doesn’t work.

And the real problem isn’t that some fly-by-night publishers hoping to make a quick buck aren’t even doing peer review (although that is a problem). While some fringe OA publishers are playing a short con, subscription publishers are seasoned grifters playing a long con. They fleece the research community of billions of dollars every year by convincing them of something manifestly false – that their journals and their “peer review” process are an essential part of science, and that we need them to filter out the good science – and the good scientists – from the bad. Like all good grifters playing the long con, they get us to believe they are doing something good for us – something we need. They pocket our billions, with elegant sleight of hand, then get us to ignore the fact that crappy papers routinely get into high-profile journals simply because they deal with sexy topics.

But unlike the fly-by-night OA publishers who steal a little bit of money, the subscription publishers’ long con has far more serious consequences. Not only do they traffic in billions rather than thousands of dollars and deny the vast majority of people on Earth access to the findings of publicly funded research, the impact and glamour they sell us to make us willing participants in their grift has serious consequences. Every time they publish because it is sexy, and not because it is right, science is distorted. It distorts research. It distorts funding. And it often distorts public policy.

To suggest – as Science (though not Bohannon) is trying to do – that the problem with scientific publishing is that open access enables internet scamming is like saying that the problem with the international finance system is that it enables Nigerian wire transfer scams.

There are deep problems with science publishing. But the way to fix this is not to curtail open access publishing. It is to fix peer review.

First, and foremost, we need to get past the antiquated idea that the singular act of publication – or publication in a particular journal – should signal for all eternity that a paper is valid, let alone important. Even when people take peer review seriously, it is still just representing the views of 2 or 3 people at a fixed point in time. To invest the judgment of these people with so much meaning is nuts. And it’s far worse when the process is distorted – as it so often is – by the desire to publish sexy papers, or to publish more papers, or because the wrong reviewers were selected, or because they were just too busy to do a good job. If we had, instead, a system where the review process was transparent and persisted for the useful life of a work (as I’ve written about previously), none of the flaws exposed in Bohannon’s piece would matter.

Cross-posted from Michael Eisen’s blog, This is NOT Junk

Comments to “Open access is not the problem – my take on Science’s peer review ‘sting’”

  1. One way to fight back against the peer review system is by using the system against itself. Since the peer review system denies publication to anything that has already been published elsewhere, the trick is to anticipate the next modification within a mainstream paradigm (that is, the next “normal science” modification, not the next paradigm shift) and publish it before it gets published in peer review. That mainstream theory will then be denied peer review publication!

    This does not require stealing any information, but can be done perfectly legally — by knowing your enemy and taking the “devil’s advocate” point of view. If you can already anticipate what “modifications” will be published within a mainstream paradigm next (that is, if you have successfully predicted such “modifications” before), then just start self-publishing what you expect the mainstreamists to claim next. If you can imagine not only one but a few alternatives of such, publish them all. That is, publish those that could be expected to be modifications within a mainstream paradigm, while preferably keeping those that could constitute paradigm shifts under your hat.

    And self-publish it openly accessible to anyone with Internet (e.g. blogs, forums, social media, wikis and so on) to increase the likelihood that the “redundant publication” detector of the peer review system will detect it (and thus deny it peer review publication). Publication on less accessible parts of the Internet (such as ebooks or freenet) is not as efficient since it decreases the chances of detection, and physical books are even less efficient.

    If you are not so good at predicting how the mainstream paradigms will “modify” their theories next (that is, if you either never listen to or read what the mainstreamists actually say, or are taken completely by surprise most or all times you actually do), you should read their publications to know your enemy and become better at anticipating what they may claim next.

    When many people do this and mainstream paradigms are thus prevented from getting their “modifications” published in any respected peer review journals, the tyranny of the peer review system will at last lose its support and be deposed.

  2. Your piece reminds me of criticism of a measurement of multi-photon free-free absorption by a smart-guy theorist who told the experimentalist: you have merely verified the Bessel function.

    Of course you are finding that referees and editors are of easy virtue. Why? Because scientific publishing depends on trusting an author to take the primary responsibility for honest and careful work. Your reputation and the reputation of your institution play the major role. Try submitting a brilliant paper under an assumed name using your home address. Also try, as another reader suggested, submitting something unorthodox such as Willis Lamb’s paper that the photon does not exist and is still around because of follow-the-leader physics. Lamb’s paper was accepted (in a minor journal) because he is a Nobelist and a great physicist to boot.

    You have discovered what every seducer knows: that his social rank and wealth are the essence of his desirability. I think you are misguided to attack on-line publishing. Let’s let everyone have the uncensored wad in order to separate the wheat from the chaff.

  3. You think the science peer review process is a joke… you should look at the peer review process within the major finance/economics journals and their affiliated universities. Journals that for decades allowed the Chicago School, and their associates at major econ departments in the US – yes, including Berkeley – ideology hucksters all, to convince policymakers that financial regulation was the single greatest threat to the long-term economic well-being of the US economy.

    These journals passed most of this research off as scientific fact under a peer review process that eagerly rubber-stamped any research, no matter how flawed, that extolled unregulated and unfettered free markets as the solution to all of society’s ills. A few financial crises later, not only are some of these deregulation hucksters not exposed as complete frauds, a number of them were actually named to key cabinet positions within the Obama Administration, serve on corporate boards, and consult for big bucks to the IMF/World Bank. These hucksters push their wares globally. I guess that’s why it’s called globalization. 🙂

    My point: the huckster academic community is alive and well, and significantly worse within the finance/economics world than in the science fields, as the peer review process has been used as a means to advance only the free-market ideology – theology, really – at major journals, while at the same time suppressing honest debate and the free exchange of ideas. You may complain about the lack of review within the science community that allows, in your opinion, flawed research to move forward. The problem is that in the econ/finance fields you have the completely opposite problem: no paper, no matter how well done, that is critical of the new free-market, neo-liberal order will ever be published in any top finance or economics journal, as the peer review process is specifically in place to suppress intellectual discourse and dissent.

    As an economic historian, I would rather be in your boat than mine.

    • “no paper, no matter how well done, that is critical of the new free-market, neo-liberal order will ever be published in any top finance or economics journal, as the peer review process is specifically in place to suppress intellectual discourse and dissent”
      The AER is considered one of the most prestigious journals in economics. You can see some notable papers it has published on Wikipedia.
      The evidence does not support jose camacho’s statement.

  4. Now we have real, proven data on the quality of SOME OA journals (setting aside the fact that it was not compared with similar subscription-based journals, and other weaknesses of this study). Even though this experiment is ‘not perfect’, I am happy to see it throwing light on the quality of the ‘screening and peer review service’ of OA journals. I strongly believe that scholarly publishers should work like ‘strict gatekeepers’ by arranging honest and sincere peer review. This is the main difference between a ‘scholarly publisher’ and a ‘generic publisher’ (who publishes anything). Other work like typesetting, proofing, printing, web hosting and marketing is important but not unique to a scholarly publisher. (My personal opinion is that we should not waste time debating good OA vs. bad OA, good CA vs. bad CA, etc. It is time to work. We must jump to more effective analyses and use of these huge, precious data.)

    I know and strongly believe that Beall, being an academician-librarian, also gives the highest importance to this particular criterion above all else. I congratulate Beall that his theory has been experimentally proven by Science’s sting operation.

    I know this sting operation is going to generate a huge debate: one group will try to find the positive points and the other will try to prove it a bogus experiment. A simple, endless, useless fight and a waste of time. It will be more important to find some way to use this huge dataset more effectively.

    Now I have some suggestions and questions:
    1. How are we going to use the huge data generated by this year-long experiment?
    2. I request DOAJ and OASPA to do some constructive work using these data.
    3. Can we develop some useful criteria for screening OA publishers from the lessons of this experiment?
    4. Is there any way of rewarding the publishers who properly and effectively passed this experiment (rejecting the article through proper peer review)? I noticed some journals rejected due to ‘scope mismatch’. That is certainly a criterion for rejection, but it does not answer what would have happened had the scope matched.
    5. I have seen the criticism of Beall that ‘he is… trigger-happy’. It is now time for Beall to prove that he not only knows how to punish the bad OA publishers, but also how to reward one that intends to improve on its previous situation. Is there any possibility that these data can be used for the ‘appeal’ section of Beall’s famous blog? Sometimes a judge can free somebody on circumstantial proof, even without a formal appeal. (Think about posthumous awards/judgments.) I always believe that ‘reward and inspiration of good’ is more powerful than ‘punishment for doing bad’. But I also believe that both should exist.

    If anybody tells us that “The results show that Beall is good at spotting publishers with poor quality control”, that tells only one part of the story. It highlights only who failed this experiment; it does not tell or highlight anything about those publishers who passed the experiment but still occupy a seat on Beall’s famous list. I really hate this trend. My cultural belief and traditional learning tell me that “if we see only one lamp in an ocean of darkness, then we must highlight it, as it is the only hope. We must protect and showcase that lamp to defeat the darkness”. I don’t know whether my traditional belief is wrong or right, but I will protect this faith till my death.

    I really want to see Beall officially publish a white-list of ‘transformed predatory OA publishers’, where he clearly writes the reasons for each removal from the ‘bad list’, so that other lower-quality predatory OA publishers can learn from that discussion how to improve (if they really want to do so) and how to get out of Beall’s ‘bad list’. This step will essentially complete the circle Beall started.

    Ideally, I STRONGLY BELIEVE that Beall will be the happiest person on earth if one fine morning his list of ‘bad OA publishers’ contains zero names, all of them having been transferred to a list of “good OA publishers”, transformed with the help of an effective peer review process initiated by Beall.

    Akbar Khan

  5. Thanks – enjoyed the article. The continuing issue is why we are no longer able to research and evaluate in a non-biased manner and accept the results, even if they do not fit our own perceptions, or the group’s perceptions, or the money-making interests involved in or related to the issue being evaluated and researched. Quite the compromise when we do not remain neutral.

  6. Every system will have its own natural margins, for no system can ever be perfect – and therein lies the key to the progress of the human race. So let’s give space for all systems to flow, instead of trying to stifle progress in any manner, sting operations included. Be content with the (sting) results alone, without passing judgement or trying to influence others’ thinking, for everyone must have space to move and explore. Freedom is, after all, freedom and nothing less. We are evolving all the time. Hence, the unwanted findings reported by John Bohannon simply mean we are on the right track of progress – just as Professor Michael Eisen has brought a respectable change to OA scientific publication today.

  7. The issue of reviews and reviewing is as old as the anonymously peer-reviewed journal itself. Usually the objection is that anonymous reviewers are too critical. I think Jerzy Neyman, the eminent Berkeley statistician, once lamented that there is something that turns a decent, pleasant, fair human being into an s.o.b. once s/he becomes an anonymous reviewer. His frustrations eventually led him to found a new statistics journal.

    What about the issue of reviewer bias? My recollection is that one investigator some years ago submitted to social science journals papers using exactly the same methodology but reporting opposite results. The acceptance rate for those reaching so called “politically correct” conclusions was far higher than those refuting them, leading to howls of outrage about the investigator’s wasting people’s time.

    One possible way of avoiding myriad problems with reviewers is to ask reviewers to surrender anonymity, and for editors to reveal whom they consulted or chose to review a paper. At least things would be out in the open, and an editor could not use non-existent reviewers.

  8. In my experience they’ll publish almost anything, as long as you’re willing to pay through the nose for it. They generally only publish this stuff online, so it really costs them next to nothing to do so. They then charge incredible amounts of money to access some of these articles. They’re making money both ways on this racket.

    What is most absurd is that they supposedly have such a good name that people go for it. I am an independent researcher, so I’d have to pay my own fees… I’ve had articles accepted a number of times (and never had one rejected), but I’ve always balked at paying their exorbitant rates. It is a SCAM and then some… but it’ll keep going on as long as so many keep paying them.


  9. To show that the bogus-standards effect is specific to Open Access (OA) journals would of course require submitting also to subscription journals (perhaps equated for age and impact factor) to see what happens.

    But it is likely that the outcome would still be a higher proportion of acceptances by the OA journals. The reason is simple: Fee-based OA publishing (fee-based “Gold OA”) is premature, as are plans by universities and research funders to pay its costs:

    Funds are short and 80% of journals (including virtually all the top, “must-have” journals) are still subscription-based, thereby tying up the potential funds to pay for fee-based Gold OA. The asking price for Gold OA is still arbitrary and high. And there is very, very legitimate concern that paying to publish may inflate acceptance rates and lower quality standards (as the Science sting shows).

    What is needed now is for universities and funders to mandate OA self-archiving (of authors’ final peer-reviewed drafts, immediately upon acceptance for publication) in their institutional OA repositories, free for all online (“Green OA”).

    That will provide immediate OA. And if and when universal Green OA should go on to make subscriptions unsustainable (because users are satisfied with just the Green OA versions), that will in turn induce journals to cut costs (print edition, online edition), offload access-provision and archiving onto the global network of Green OA repositories, downsize to just providing the service of peer review alone, and convert to the Gold OA cost-recovery model. Meanwhile, the subscription cancellations will have released the funds to pay these residual service costs.

    The natural way to charge for the service of peer review then will be on a “no-fault basis,” with the author’s institution or funder paying for each round of refereeing, regardless of outcome (acceptance, revision/re-refereeing, or rejection). This will minimize cost while protecting against inflated acceptance rates and decline in quality standards.

    That post-Green, no-fault Gold will be Fair Gold. Today’s pre-Green (fee-based) Gold is Fool’s Gold.

    None of this applies to no-fee Gold.

    Obviously, as Peter Suber and others have correctly pointed out, none of this applies to the many Gold OA journals that are not fee-based (i.e., do not charge the author for publication, but continue to rely instead on subscriptions, subsidies, or voluntarism). Hence it is not fair to tar all Gold OA with that brush. Nor is it fair to assume — without testing it — that non-OA journals would have come out unscathed, if they had been included in the sting.

    But the basic outcome is probably still solid: Fee-based Gold OA has provided an irresistible opportunity to create junk journals and dupe authors into feeding their publish-or-perish needs via pay-to-publish under the guise of fulfilling the growing clamour for OA:

    Publishing in a reputable, established journal and self-archiving the refereed draft would have accomplished the very same purpose, while continuing to meet the peer-review quality standards for which the journal has a track record — and without paying an extra penny.

    But the most important message is that OA is not identical with Gold OA (fee-based or not), and hence conclusions about peer-review standards of fee-based Gold OA journals are not conclusions about the peer-review standards of OA — which, with Green OA, are identical to those of non-OA.

    For some peer-review stings of non-OA journals, see below:

    Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187-195.

    Harnad, S. R. (Ed.). (1982). Peer commentary on peer review: A case study in scientific quality control (Vol. 5, No. 2). Cambridge University Press.

    Harnad, S. (1998/2000/2004). The invisible hand of peer review. Nature [online] (5 Nov. 1998); Exploit Interactive 5 (2000); and in Shatz, B. (Ed.) (2004), Peer Review: A Critical Inquiry. Rowman & Littlefield, pp. 235-242.

  10. The flip side of this is that valid unorthodox papers get immediately rejected by editors, because they are unwilling or unable to find reviewers who can engage the valid science underlying the unorthodox assumptions.
