There has been a big jump in paper retractions in science over the last 20 years.
What’s going on?
How do the trends in the stem cell field compare to other fields like cancer research and more broadly?
Is AI going to change the dynamic between those engaging in misconduct and those who are looking for misconduct in papers?
Retractions have spiked overall
When I started as a professor it was very rare for papers in my two main fields, stem cells and cancer, to be retracted. Going further back, all the years I was a postdoc, I don’t recall hearing about any papers being retracted. Admittedly, I wasn’t thinking about this issue.
At the same time, retracted papers were just extremely rare.
Now it’s not so unusual.
Importantly, while some retractions are due to inadvertent but serious errors or problems, others are the result of misconduct. So a retraction does not mean there was misconduct. For an example of two polar opposite retraction cases see: A tale of two stem cell retractions.
Whatever the issues with specific papers, the move upward in annual retractions shows no sign of abating.
For context, Ivan Oransky of Retraction Watch wrote last year in Nature about the increase in retractions. What is Retraction Watch? It’s a helpful site that tracks paper retractions and has a database of their findings that is publicly available.
Stem cell retractions
Let’s look at some of that data. See the three graphs included in this post on annual retractions.
Annual stem cell paper retractions are way up, but then so are cancer-related retractions and just retractions overall. The three are trending upwards in very similar ways.
I also tried using PubMed to get retraction data. However, the data there were harder to capture in a clear, consistent way. PubMed actually has filtering tools to retrieve just retracted papers (which I didn’t know), but the search gave conflicting results depending on how I did it.
From the Retraction Watch database, I also get a higher number of stem cell paper retractions than what I see on PubMed, no matter how I do the searching.
Note that I’m not that familiar with the Retraction Watch database yet, so there may be better or more systematic ways to capture the data. For that reason, consider the data here cautiously. It’s just a first stab at the trends. For example, my searches were based on a limited number of key title words, which likely missed many papers in the general areas of stem cell or cancer research.
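For those who want to try the PubMed route themselves, retracted papers can be pulled via NCBI’s E-utilities using the “Retracted Publication” publication type filter. This is just a minimal sketch: the esearch endpoint and the publication type tag are real PubMed features, but the keyword, year, and the helper function name are illustrative choices of mine, not a definitive search strategy.

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint (real); rettype=count returns only a hit count.
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def retraction_query(keyword: str, year: int) -> str:
    """Build an esearch URL counting retracted PubMed papers with a
    given title keyword in a given publication year (illustrative helper)."""
    term = (
        f'{keyword}[Title] AND "Retracted Publication"[Publication Type] '
        f"AND {year}[PDAT]"
    )
    return f"{BASE}?{urlencode({'db': 'pubmed', 'term': term, 'rettype': 'count'})}"

url = retraction_query("stem cell", 2022)
print(url)
```

As the post notes, results like these can shift depending on exactly how the query is phrased (title words vs. all fields, publication date vs. entry date), which is one reason the counts here didn’t line up neatly with Retraction Watch.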
Why more retractions?
Retraction Watch reports that retractions relative to total publications have been going up steadily too. That trend looks similar to what I found, though perhaps not quite as steep.
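The distinction between raw counts and rates matters, since total publication volume has also grown. A quick back-of-the-envelope way to think about it, using entirely made-up placeholder numbers (not real Retraction Watch or PubMed figures):

```python
# Hypothetical counts for illustration only -- these are NOT real data.
retractions_by_year = {2002: 120, 2012: 1500, 2022: 5500}
publications_by_year = {2002: 1_100_000, 2012: 1_900_000, 2022: 3_300_000}

# Rate per 10,000 papers controls for the growth in publishing overall.
rates = {
    year: retractions_by_year[year] / publications_by_year[year] * 10_000
    for year in retractions_by_year
}

for year, rate in rates.items():
    print(f"{year}: {rate:.2f} retractions per 10,000 papers")
```

Even after normalizing this way, the rate can still climb substantially, which is the pattern Retraction Watch describes.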
So why all the retractions? One big change is that science is doing a far better job of detecting paper issues.
Scientists and others acting as sleuths of a sort have been collectively devoting a far greater amount of time and energy to scrutinizing papers. They may be getting better at it too.
It’s still a somewhat thankless marathon.
Probably the most well-known misconduct researcher is Elizabeth Bik. She works on this full-time.
A high-profile ongoing case
When papers with potentially serious problems are identified, what happens next is often very complicated. Which authors are responsible? Are the original data available? Institutions sometimes announce investigations but then either nothing comes of those or they conclude no one did anything wrong even if the paper(s) in question have big problems. Some journals don’t help the situation. These processes can also take many years during which things remain unclear.
For example, the highest profile situation unfolding right now remains up in the air and may continue that way for some time. It relates to some apparently problematic papers with Stanford President Marc Tessier-Lavigne as an author, sometimes the senior author. Bik has had a role in analyzing these pubs. Stanford is investigating.
A recent report from the school newspaper The Stanford Daily included more paper issues from Tessier-Lavigne’s time at Genentech.
It’s unknown if any of the papers will ultimately be retracted or if any of this will impact Tessier-Lavigne’s future at Stanford. We don’t know if he had a direct role in any misconduct so caution is needed.
Correction as an out instead of retraction
In the bigger picture, the retraction surge would likely be even more pronounced if journals had retracted, rather than merely corrected, some seriously flawed papers.
A correction can be an appropriate tool when a paper has one or a few small-to-medium issues. Yet some journals misuse corrections as a way to handle papers with misconduct severe enough to warrant retraction. Why? Journals don’t want to be known for having a lot of retracted papers. Misusing the correction option is not good for science though.
Looking ahead, including a possible AI arms race
Looking to the future, there has been discussion of how AI could fuel a kind of arms race in the misconduct and retraction area.
On the one hand, sleuths armed with AI may more readily find majorly flawed papers. Automation could make identifying paper issues much less laborious. Specific potential cases could then be evaluated by people.
At the same time, researchers intent on manipulating data for their papers could use AI to that end. It might even enable them to produce entirely fictitious data or figure panels. Something like ChatGPT could write the text too.
How would that be detected? By AI designed to look for it?
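One concrete technique automated sleuthing could build on is perceptual hashing, which is used to flag near-duplicate or reused figure panels. Real tools operate on actual images; the tiny integer grids below are hypothetical stand-ins for downscaled grayscale panels, and the function names are mine:

```python
def average_hash(pixels):
    """Bit string with 1 wherever a pixel exceeds the panel's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits between two hashes of equal length."""
    return sum(x != y for x, y in zip(a, b))

panel_a = [[10, 200], [30, 220]]   # hypothetical figure panel
panel_b = [[12, 198], [29, 223]]   # slightly altered copy of panel_a
panel_c = [[200, 10], [220, 30]]   # genuinely different panel

h_a, h_b, h_c = (average_hash(p) for p in (panel_a, panel_b, panel_c))
print(hamming(h_a, h_b))  # small distance -> possible duplication, flag for human review
print(hamming(h_a, h_c))  # large distance -> panels look distinct
```

The point of approaches like this is triage: software flags suspicious similarity at scale, and humans then evaluate the specific cases, as described above.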
It’s easy to see how this could spiral into an “arms race” with major headaches for science.