Thursday, December 23, 2010

Is our grant peer-review system the "best of all evils"?

'Tis the season. No, not that one- like every year at this time (and two other times each year- once in May and once in September), many of us get a stack of grant proposals to review. I just got my stack yesterday and haven't glanced at them yet. For those of you outside the science research community, researchers spend months preparing these proposals (12-15 pages in length, plus several more pages of budget, resume, and other required documents). They agonize over minutiae like whether to cite X or Y paper, whether to discuss contingency plans in detail or to present an air of confidence in the approach, whether to present the fine experimental details or save space for more justification of the study's value to the field, etc.

After these months of work, three or more other anonymous scientists ("reviewers"/"panelists") read over the proposals- rarely spending more than a few hours on any one in particular (and, frighteningly, sometimes less). They write a critique- the strengths and weaknesses of the work (kudos to NIH for forcing reviewers to actually mention the strengths). The work is then discussed in a meeting with ~20 other scientists who have not read the proposal. And finally, it's given a score. Then, in large part using that score, the funding agency decides whether or not to fund the research. Odds of funding are around 10% (don't nitpick- this is close enough).

Most of the ones I read are from people like me- university professors. So, my question is this: is this really a good use of our time? And if not, is there a viable alternative, or is this "the best of all evils"? Let's think of how many person-hours go into proposal preparation and proposal review. Some of it is definitely worthwhile- forcing scientists to carefully plan their research and to think about what would have the highest impact rather than maintain the status quo. The presentations often help guide the subsequent publications in thoughtful ways. Similarly, reviewers benefit by getting a broader view of the field and of what kinds of research people are doing.

But there are downsides, even beyond the time commitments of all the parties involved. We all know a few outstanding scientists who, for many different reasons, seem incapable of writing decent proposals. We also all know some who are great "salespeople" but may be lacking in scientific follow-through or rigor. The present system strongly favors the latter over the former. There's also a general risk aversion that comes across in peer review, despite extensive efforts by funding agencies to prevent it. If only ~10% of proposed research will be funded (for reference, in a two-day grant review meeting, that may be ~6 proposals), reviewers sometimes have a hard time suggesting that something that "would be very cool but might not work" get funded over something that "would be kind-of cool but will for-sure work." Peer review is also far from perfect- many good proposals get sunk because a reviewer misunderstood something or had a different perspective than the proposer on what is useful. Finally, and frighteningly, university promotion and tenure decisions often weigh funding success seriously- treating it as a quick surrogate for how the community views one's research.

Scientific publishing has gone through major revolutions in the past decade, first with the push for "open access", and more recently with the decision that the merits of science should sometimes be judged historically (the essence of PLoS ONE): all work that is scientifically valid should be published, and history will decide what was most useful. Obviously, public funds (remember- the public pays for all this with taxes!) are more precious than whether something gets published or not, so a direct analogy is not possible. But is there another way? Or is this really the best of all evils?


8 comments:

  1. Hey Mo-
    Having just served on my first funding panel, I gotta tell you: panelists need the reviews! We need folks who are "experts" (familiar with the proposed research) to comment on proposals. As a panelist, after making my own assessment, I carefully read the comments of the reviewers. Some of the reviewers were spot on, some weren't, and as panelists we discussed any discrepancies and outliers. One reviewer did not usually sink a proposal, but one could push a proposal into that "Outstanding" category.

    I've also reviewed once for NSF. I did find it informative even if it did take a bit of my time.

    All in all, although my experience is limited, I do think reviewing proposals is a good use of our time.

  2. Being a non-academic type, I don't know how folks are chosen to review proposals. I also don't know how large your stack of proposals is, so I can't say whether it's too large for you to give each proposal a thorough examination. Being a manager of technology and computer systems, I am always thinking about ways to make the best use of man-hours and reduce waste. Could an online mechanism help by doling out the proposals to a pre-selected database of eligible reviewers? Sort of an online document management system that a reviewer would log in to and manage his/her proposals in a queue? The reviewer could pass on the ones he doesn't know much about, vote on the ones he does, and/or communicate with other reviewers to "trade" proposals to review (assuming there would be a quota that each reviewer must meet). A rough sketch of the idea follows below.

    Anyway, I know very little about how the process works, but I know that technology (and the Internet) can help simplify processes like this one. My three cents...
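
    For what it's worth, a minimal sketch of the queue-and-pass mechanism described above might look like the following Python. Every class, method, and proposal name here is invented purely for illustration- a real agency system would of course handle logins, quotas, and "trading" with far more care:

        from collections import deque

        class ReviewQueue:
            """Toy model of the comment's idea: each reviewer gets a queue of
            proposals matched to his/her expertise, and may pass on a mismatch."""

            def __init__(self, quota=5):
                self.quota = quota     # per-reviewer target; enforcement not modeled here
                self.queues = {}       # reviewer -> deque of (proposal_id, topics)
                self.expertise = {}    # reviewer -> set of topic keywords

            def register(self, reviewer, topics):
                self.queues[reviewer] = deque()
                self.expertise[reviewer] = set(topics)

            def assign(self, proposal_id, topics, exclude=()):
                # Dole the proposal out to the least-loaded reviewer whose
                # expertise overlaps its topics (skipping anyone in `exclude`).
                eligible = [r for r, known in self.expertise.items()
                            if known & set(topics) and r not in exclude]
                if not eligible:
                    raise LookupError("no eligible reviewer for " + proposal_id)
                reviewer = min(eligible, key=lambda r: len(self.queues[r]))
                self.queues[reviewer].append((proposal_id, set(topics)))
                return reviewer

            def pass_on(self, reviewer, proposal_id):
                # Reviewer declines; the proposal is re-doled-out to someone else.
                topics = dict(self.queues[reviewer])[proposal_id]
                self.queues[reviewer].remove((proposal_id, topics))
                return self.assign(proposal_id, topics, exclude={reviewer})

        # Example: two reviewers, one proposal, one "pass".
        panel = ReviewQueue(quota=3)
        panel.register("reviewer_a", ["genomics", "proteomics"])
        panel.register("reviewer_b", ["genomics", "ecology"])
        first = panel.assign("proposal-001", ["genomics"])   # goes to the less-loaded match
        second = panel.pass_on(first, "proposal-001")        # reassigned to the other reviewer

    The key design choice is matching on expertise keywords and load-balancing by queue length; the quota tracking and reviewer-to-reviewer "trading" the comment mentions would sit on top of this basic assignment logic.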

  3. Two posts copied from Facebook comments to this entry:

    From David S.

    Hmmm... I think it is, as I don't know what the alternative "best" would be. In a way, it is good that other systems exist, thus finding homes for excellent researchers who are poor grant writers... thinking of the more tiered systems in Europe and Asia, and perhaps NIH here. It is a difficult question/problem.


    From Jerry C.

    I think we should have a modification of the Canadian system. Beginning researchers get a modicum of cash, and then the pot is refilled if they do well and publish. No need for a "research plan".

  4. No matter the system, those who are relatively less successful will have supposedly objective reasons why another system is better. Those who make out okay will say it's the best of a bad lot. Period.

  5. Who benchmarks the panels, and by what standards? Who is good at predicting discovery, versus a statistical random sample? @drugmonkey: imho the question is not whether it's fair to measure the applicants, but how to measure the judges.

  6. All good points from everyone.

    One big problem with variance in review quality is that there's no reward for doing an especially good job. If you spend 6 hours reviewing each grant- thinking about the context, looking up citations, etc.- you get no more reward (aside from the satisfaction of a job well done) than someone who flipped through each grant over lunch and scratched some quick notes, or worse- no more reward than the fellow who refused to review altogether. The trouble is that the funding agencies sponsoring the review process have nothing else to offer- they'd have to cut funding further if they offered money or promises of grants to good reviewers.

    The other big problem is sheer scale. Many fields are growing in numbers of people. But they're also growing in expectations- many people at research universities are supposed to publish not one paper but many papers, and to secure many grants, in order to be labeled "successful" and get raises/tenure. They knowingly aim some of their papers too high (e.g., Nature, Science) for a long-shot at acceptance, meaning that each paper likely ends up being reviewed at multiple journals by multiple reviewers. These have always been issues, but I think they're bigger issues now than in the past. That's straining the system- the number of manuscripts and proposals each of us has to review to sustain all this has increased enormously.

    So, I like some of the essence of Jerry C.'s idea (adding to it that some measure of research quality should be evaluated beyond just publication counts, though I'm sure he meant that). It would simplify the review process and decrease the time spent on proposal preparation. We would, however, lose the enforced thought process associated with proposal preparation, and the useful experience associated with proposal review (which Jessica noted). Rather than a "solution", maybe it's a starting point for thinking about the problem and possible related solutions...?

  7. rc, I think one of the larger problems in the water-cooler-BS-session analysis is our inability to grasp that larger funding agencies have a whole host of goals and intentions, some of which may even be contradictory. So coming up with a universal measure of the quality of review is impossible.

  8. @drugmonkey- just read your September 4, 2010, post- very relevant and well-considered!
