September 26, 2020

The Niche

Knoepfler lab stem cell blog

Transformative idea for peer review: reviewing & grading the reviewers

Do you feel frustrated with the current peer review system in science?

I have an idea that might help, and it involves a revolutionary notion: reviewer accountability.

In other words, authors and grant writers in essence review their reviewers.

I’ve made this easier for you with templates that you can quickly fill in and submit: to the journal editor regarding a paper review, or to the funding agency regarding a grant review.

Ever wonder what the heck a reviewer was thinking when you got back a review of a paper or grant, but felt helpless, with no recourse beyond perhaps complaining to a journal editor or funding agency official? Did you hesitate because you didn’t want to seem like a “troublemaker”?

I don’t mean the usual angst that might accompany a negative outcome. Rather, I am speaking of reviews that are just downright incompetent, unscientific, or vindictive. These seem more and more common these days.

Whether it was a review of a grant or a paper, I think most of us have found ourselves in a position where we just felt the reviewer was lousy. Even a few days later, after calming down, we still believed the reviewer, to put it simply, did a bad job.

What can we do?

Up until now, not much other than complain.

Perhaps the reviewer did not read the grant or paper carefully, perhaps they were out to kill it, or perhaps for some unknown reason they just did a terrible job.

Shouldn’t such a reviewer be held accountable for that?

Right now, they aren’t.

To my knowledge, journals do not track the behavior and competence of their reviewers, but I believe they should. Perhaps journal editors and grant review officials are informally aware of and concerned about reviewers who are bad actors, but only in the rarest of circumstances do they ever do anything about it.

When paper authors or grant proposal submitters complain, they are viewed skeptically and often harshly. Plus, you might ask yourself: how do I even go about properly providing feedback on a reviewer? Who knows.

Therefore, I propose a new, simple system whereby journals and funding agencies keep score over time of how good a job their specific reviewers do.

They do this based on quantitative feedback from us!

Yes, you read that right.

Reviewers get reviewed. They are held accountable. They get scored.

In such a system, the reviewers review papers or grants as usual, but at the end of the process, say 3 days or 1 week later, the grant applicants or paper submitters in turn return scores rating how good a job they think the reviewers did.

Basically review becomes a two-way street.

Yeah, you might say, great idea, but won’t the recipients of reviews almost always harshly grade the reviewers, particularly if a grant goes unfunded or a paper is rejected?

Overall, I don’t think so.

Oftentimes I myself get upset about a negative outcome from a review process, whether for a grant or a paper, but in many cases after I calm down I start to see that in some reviews the reviewers made good points and actually put some work into being good reviewers. In other words, they were competent. They read the grant or paper. They thought about it. I appreciate that.

In contrast, other reviews are clearly incompetent or have ulterior motives that are all too obvious.

The reason for the 3-7 day post-review waiting period before the grant or paper submitter returns a reviewer assessment score is to give submitters time to calm down and think it over. This new system I am proposing is not for the purpose of venting hard feelings, but rather for providing data that helps journals and grant funding agencies determine their best and worst reviewers.

Over time journals and funding agencies such as NIH would start to see patterns of reviewer scores indicating, I would argue, who the good reviewers are and who the not so good reviewers are.

I propose that reviewers consistently receiving, say, bottom 10-20% scores would get the boot.

They literally would no longer be invited to be reviewers for, say, a period of a year. In other words, reviewers would be held accountable for how good a job they do. Such scores might even come to be part of tenure and promotion packets, with faculty proudly indicating (as the case might be) their relatively positive scores as reviewers.
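To make the bookkeeping concrete, here is a minimal sketch, in Python, of how a journal or funding agency might tally the feedback and flag its lowest-scoring reviewers. The 1-5 scale, the five-review minimum, and all names here are illustrative assumptions on my part, not a prescription for any particular software:

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical sketch: aggregate post-review feedback scores per reviewer and
# flag the bottom slice for a one-year break from the reviewer pool. The 1-5
# scale and all names here are illustrative assumptions.

scores = defaultdict(list)  # reviewer_id -> list of feedback scores

def record_feedback(reviewer_id, score):
    """Store one feedback score, collected 3-7 days after the review."""
    if not 1 <= score <= 5:
        raise ValueError("score must be between 1 and 5")
    scores[reviewer_id].append(score)

def flag_bottom_reviewers(cutoff=0.20, min_reviews=5):
    """Return the bottom `cutoff` fraction of reviewers by average score,
    each mapped to a suspension end date one year out."""
    averages = {
        rid: sum(s) / len(s)
        for rid, s in scores.items()
        if len(s) >= min_reviews
    }
    ranked = sorted(averages, key=averages.get)  # worst average first
    n_flagged = int(len(averages) * cutoff)
    suspended_until = date.today() + timedelta(days=365)
    return {rid: suspended_until for rid in ranked[:n_flagged]}
```

The minimum-review floor in the sketch is a deliberate design choice: it keeps a single angry author from sinking a reviewer with one bad score.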

Sure, you might say, but what about the fact that funding agencies and journals need all the reviewers they can get?

Perhaps, but eliminating the 10-20% of worst-performing reviewers would not make much of a dent in the overall pool and would dramatically improve the review system.

Why?

I think it would improve things because even though reviewers would remain anonymous to applicants and paper submitters, as many people (but not all) believe to be important, the reviewers would nonetheless be held accountable, with consequences, for the job they did! As a result, I believe reviewers would take the review process more seriously and be far less likely to behave badly as the classic “reviewer #3” or, as I have called it, “Dr. No”.

I also believe this system would be great for younger scientists, including postdocs and students, because it could incorporate a way for them to be scored as reviewers even when they work with their PI to review a paper. Over time such young scientists might grow to be trusted and productive reviewers for editors in their own right, even before they officially transition to independence. Thus, such a system might increase the number of vetted reviewers.

Sure, you say, but editors and funding agencies will resist adopting your system because it is more work for them, and generally people avoid change.

You are right that there may be some resistance, but I propose that grant submitters and paper submitters simply start sending feedback to funding agencies and journal editors whether they ask for it or not. Over time I think they’ll start using that data to evaluate reviewers. You can use the handy forms I made; filling one out shouldn’t take more than a few minutes. Then simply submit it post-review.
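If you prefer to roll your own rather than use my forms, here is a hypothetical sketch of the kinds of fields such a feedback form might capture. The field names and the 1-5 scale below are illustrative assumptions, not the contents of the actual templates:

```python
from dataclasses import dataclass

# Hypothetical sketch of the fields a reviewer-feedback form might capture.
# Field names and the 1-5 scale are illustrative assumptions.

@dataclass
class ReviewerFeedback:
    submission_id: str          # the paper or grant that was reviewed
    reviewer_label: str         # e.g. "Reviewer 2"; identity stays anonymous
    read_carefully: int         # 1-5: did the reviewer actually engage with the work?
    scientifically_sound: int   # 1-5: were the critiques competent and accurate?
    constructive_tone: int      # 1-5: fair and professional, or vindictive?
    comments: str = ""          # optional free-text explanation for the scores
```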

If you feel extra strongly about it, you might consider telling editors that you won’t be submitting papers to their journal anymore if they do not allow feedback on their reviewers.

The bottom line is that right now too many reviewers do a crappy job and are not held accountable, which is a fundamental weakness of our current system of science.

In fact, I can’t think of a more troubling weakness in science today.

I firmly believe that my proposed reviewer scoring system would transform the process for the better.
