
Sunday, November 27, 2011

Publication/ Dissemination

I just finished a full draft of my book-- it's basically a "how-to" guide for new academic faculty on their broad yet poorly defined jobs. Many of the posts on this blog have been inspired by work on particular chapters (e.g., "Teaching", "Managing Others"). It's been a really fun project putting this all together, and I've had stimulating conversations & exchanges with many people about some of the issues raised. As I am about to send the draft off to the publisher (Sinauer) for review, I have to ask myself-- how many people will buy this? Should I have just put it all as a pdf on the web?

The underlying question is broader and relates to how we disseminate knowledge and ideas in science today. I would have been fine just putting the entire treatise on my website, but my assumption is that very few people would ever have downloaded and read it. My thought was that sending it to a book publisher gives it a "seal of approval" of sorts, demonstrating that someone else thought this collection of ideas was worthwhile. Indeed, Andy Sinauer will certainly send it to reviewers, just as he did with the initial book proposal, and I'm sure I'll be asked to revise based on their feedback. The downside to getting this seal of approval is simple-- rather than getting these ideas for free, anyone interested will have to pay, reimbursing Andy for his costs and hopefully providing a small profit. Both e-copies and hard copies will be available, and although both will certainly be cheap (assuming the book passes peer review), neither will be free. Hence, paradoxically, my assumption is that more people will read this work because it must be paid for than would have read it as a free download from my website. Yes, I'm ignoring the obvious fact that Sinauer will likely market the book far more than I would have marketed a pdf posted on my website.

The analogy to scientific publication is obvious-- I can post results of scientific experiments on my website, but no one would pay them any mind compared with results published in an outlet requiring both peer review and expense (paid by me if open access, by subscribers/ universities otherwise). Thus, we put great emphasis on peer review, to the point that we feel contributions without it have little value. The irony is that we all complain incessantly about the rigor of peer review-- how poor products slip into top journals while excellent contributions are delayed or rejected because of unreasonably high reviewer expectations.

But are the times changing? Blogs, tweets, etc., abound, and PLoS One changed the face of science publishing by shifting the emphasis from reviewers' assessment of a study's value to their assessment of its execution and description alone. What if scientific results and their associated discussions were posted in bulletin-board formats? Is the world ready for this as a means of dissemination and acceptance, or would such results be presumed overstated and/ or flawed? There's already a great deal of assumed honesty in science (e.g., we don't demand that results be replicated by other teams before publication)-- is this really such a big leap? Indeed, arXiv already hosts unrefereed papers in math and physics. Many journals also already allow commenting on studies-- maybe this will eventually morph into a Yelp-like review system of contributions:

-From Jones: 5-stars to the Smith lab for output #42431-- Really loved the rigor of their demonstration of speciation by reinforcement in Drosophila ananassae. The angle of examining the relative abundance of the two species was an excellent addition and sealed the result.

-From Clark: 3-stars to the Smith lab for output #42431-- Loved the study's execution, but they failed to cite two other papers that used the same approaches.
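If that sounds far-fetched, the machinery behind it would be trivial. Here's a minimal sketch, in Python, of the data model such a rating system might use (all names here-- Review, Output, average_stars-- are hypothetical inventions for illustration, not any existing platform's API):

from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Review:
    reviewer: str   # e.g., "Jones"
    stars: int      # 1-5, Yelp-style
    comment: str

@dataclass
class Output:
    output_id: int   # e.g., 42431
    lab: str         # e.g., "Smith lab"
    reviews: list = field(default_factory=list)

    def average_stars(self):
        # Aggregate community rating across all posted reviews
        return mean(r.stars for r in self.reviews) if self.reviews else None

# The two example reviews above, entered into the system:
paper = Output(output_id=42431, lab="Smith lab")
paper.reviews.append(Review("Jones", 5, "Really loved the rigor..."))
paper.reviews.append(Review("Clark", 3, "Well executed, but two papers went uncited."))
print(paper.average_stars())   # -> 4

The hard questions in such a system (reviewer anonymity, weighting expert versus casual ratings) are social, not technical.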

As always, comments very welcome. And please do wish me luck with the book-- I'm hoping the reviewers don't trash it because they disagree with particular suggestions, but instead see it (and this blog) as intended: "a starting point for discussion."


Thursday, December 23, 2010

Is our grant peer-review system the "best of all evils"?

'Tis the season. No, not that one- like every year at this time (and at two other times each year- once in May and once in September), many of us get a stack of grant proposals to review. I just got my stack yesterday and haven't glanced at them yet. For those of you outside the science research community, researchers spend months preparing these proposals (12-15 pages in length, plus several more pages of budget, resume, and other materials). They agonize over minutiae like whether to cite paper X or paper Y, whether to discuss contingency plans in detail or project an air of confidence in the approach, whether to present the fine experimental details or save space for more justification of the study's value to the field, etc.

After these months of work, three or more anonymous scientists ("reviewers"/ "panelists") read over the proposals- rarely spending more than a few hours on any one in particular (and, frighteningly, sometimes less). They write a critique- the strengths and weaknesses of the work (kudos to NIH for forcing reviewers to actually mention the strengths). The work is discussed in a meeting with ~20 other scientists who have not read the proposal. And finally, it's given a score. Then, relying in large part on that score, the funding agency decides whether or not to fund the research. Odds of funding are around 10% (don't nitpick- this is close enough).

Most of the proposals I read are from people like me- university professors. So, my question is this- is this really a good use of our time? And if not, is there a viable alternative, or is this "the best of all evils"? Let's think of how many person-hours go into proposal preparation and review. Some of that time is definitely worthwhile- it forces the scientist to carefully plan his/ her research and to aim for the highest impact rather than the status quo. The proposals often help guide the subsequent publications in thoughtful ways. Similarly, reviewers benefit by getting a broader view of the field and of what kinds of research people are doing.
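Just to make the scale concrete, here's a rough back-of-envelope calculation (a Python sketch; every number below is an assumption I've invented for illustration, not real data):

# Back-of-envelope estimate of person-hours per funding cycle.
# Every figure here is an assumed round number, purely for illustration.
proposals = 60                      # one panel's stack for a cycle
pi_hours_each = 200                 # months of intermittent PI effort per proposal
reviewers_each = 3                  # reviewers assigned per proposal
reviewer_hours = 3                  # "rarely more than a few hours"
panel_hours = 20 * 16               # ~20 panelists x two 8-hour days

prep = proposals * pi_hours_each                      # 12,000 hours
review = proposals * reviewers_each * reviewer_hours  #    540 hours
total = prep + review + panel_hours                   # 12,860 hours

funded = round(proposals * 0.10)                      # ~10% success rate -> 6
print(total, "person-hours spent to fund", funded, "proposals")
print(total // funded, "person-hours per funded project")  # ~2,143

Under those made-up (but, I'd argue, not crazy) numbers, each funded project costs the community on the order of a person-year of unpaid effort before a single experiment is run.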

But there are downsides, even beyond the time commitments of everyone involved. We all know a few outstanding scientists who, for many different reasons, seem incapable of writing decent proposals. We also all know some who are great "salespeople" but may be lacking in scientific follow-through or rigor. The present system strongly favors the latter over the former. There's also a general "risk aversion" that comes across in peer review, despite extensive efforts by funding agencies to prevent it. If only ~10% of proposed research will be funded (for reference, in a two-day grant review meeting, this may be ~6 proposals), reviewers sometimes have a hard time suggesting that something that "would be very cool but might not work" get funded over something that "would be kind-of cool but will for-sure work." Peer review is also far from perfect- many good proposals get sunk because a reviewer misunderstood something or had a different perspective from the proposer's on what is useful. Finally, and frighteningly, university promotion and tenure decisions often seriously consider funding success- seeing it as a quick surrogate for how the community views one's research.

Scientific publishing has gone through major revolutions in the past decade, first with the push for "open access", and more recently with the decision that the merits of science should sometimes be judged historically (the essence of PLoS One): all scientifically valid work should be published, and history will decide what was most useful. Obviously, public funds (remember- the public pays for all of this with taxes!) are a scarcer resource than journal pages, so a direct analogy is not possible. But is there another way? Or is this really the best of all evils?