I just finished a full draft of my book-- it's basically a "how-to" guide for new academic faculty on their broad yet poorly defined jobs. Many of the posts on this blog have been inspired by working on particular chapters (e.g., "Teaching", "Managing Others"). It's been a really fun project putting this all together, and I've had stimulating conversations & exchanges with many people about some of the issues raised. As I am about to send the draft off to the publisher (Sinauer) for review, I have to wonder-- how many people will buy this? Should I have just posted it all as a PDF on the web?
The underlying question is broader and relates to how we disseminate knowledge and ideas in science today. I would have been OK with just putting the entire treatise on my website, but my assumption is that very few people would have ever downloaded and read it. My thought was that sending it to a book publisher gives it a "seal of approval" of sorts, demonstrating that someone else thought this collection of ideas was worthwhile. Indeed, Andy Sinauer will certainly send it to reviewers, just as he did with the initial book proposal, and I'm sure I'll be asked to revise based on their feedback. The downside to getting this seal of approval is simple-- rather than being able to get these ideas for free, anyone interested would then have to pay money, reimbursing Andy for his costs and hopefully providing a small profit. Both e-copies and hard copies will be available, and although both will certainly be cheap (assuming the book passes peer review), neither will be free. Hence, my assumption is that more people will read the contribution when they have to pay for it than if they could download it for free from my website. Yes, I'm ignoring the obvious fact that Sinauer will likely do more marketing of the contribution than I would have if it were merely posted on my website.
The analogy to scientific publication is obvious-- I can post results of scientific experiments on my website, but no one would pay them much mind compared with publishing them in an outlet requiring both peer review and expense (borne by me if open access, by subscribers/universities otherwise). Thus, we put great emphasis on peer review, to the extent that we feel contributions without it have little value. The irony is that we all complain incessantly about the rigor of peer review-- how poor products slip into top journals while excellent contributions are delayed or rejected because of unreasonably high expectations from reviewers.
But are the times changing? Blogs, tweets, etc., abound, and PLoS One changed the face of science publishing by shifting the emphasis from reviewers' assessment of a study's value to their assessment of its execution and description alone. What if scientific results and their associated discussions were posted in bulletin-board formats? Is the world ready for this as a means of dissemination and acceptance, or would such results be presumed to be overstated and/or flawed? There's already so much assumption of honesty in science (e.g., we don't demand that results be replicated by other teams before publication)-- is this really such a big leap? Indeed, arXiv already hosts unrefereed papers in math and physics. Many journals also already allow commenting on studies-- maybe this will eventually morph into a Yelp-like review system of contributions:
-From Jones: 5-stars to the Smith lab for output #42431-- Really loved the rigor of their demonstration of speciation by reinforcement in Drosophila ananassae. The angle of looking at the relative abundance of the two species was an excellent addition and sealed the result.
-From Clark: 3-stars to the Smith lab for output #42431-- Loved the study's execution, but they failed to cite two other papers that used the same approaches.
As always, comments very welcome. And please do wish me luck with the book-- I'm hoping the reviewers don't trash it because they disagree with specifics I suggested but instead see it as it (and this blog) is intended: "a starting point for discussion."
As is usually the case, it seems like technology and culture are outpacing long-standing institutions like academia. One of the major problems with peer review is that of the two or three reviewers (for a grant or manuscript), it takes only one to tank your chances at funding or publication. No one discusses the conflict of interest inherent in the peer review system. The scientists who truly are your peers in any given area may be performing the same experiments and competing for the same research dollars or time in the publication spotlight. In a completely honest system this would not be a problem, and all work would be judged on its rigor and innovation. In reality, your manuscript can be rejected for publication by a reviewer who will publish the same experiments a month later. What does it say about this process that most journals ask authors to identify individuals who should not review their work?
Moving to a Yelp-like system would help reduce this conflict of interest through the sheer number of individual reviews. Reviews would also not be anonymous, so if you get a 1-star review (out of 5) from a direct competitor but your average rating across all reviewers is 4-stars, that 1-star rating will reflect more poorly on the other scientist for being a bad sport. Bias could still plague this system if individuals with very strong positive or negative feelings are overrepresented among posted reviews: a particular set of experiments might receive 3-stars if all readers were polled, but the average could tend toward 1-star or 5-stars if most people do not post a review. And there would need to be a process by which reviewers could request that key experiments or controls be added (a strength of traditional peer review).
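The self-selection effect described above is easy to see in a toy simulation (all numbers here are invented for illustration): suppose most readers privately rate a study around 3 stars, but only those with strong feelings-- here, mostly the disgruntled 1-star reviewers-- bother to post.

```python
import random

random.seed(1)

# Hypothetical ratings for 1000 readers, centered near 3 stars, with a
# somewhat larger pool of 1-star detractors than 5-star enthusiasts.
true_ratings = [random.choice([1, 1, 3, 3, 3, 4, 4, 5]) for _ in range(1000)]

# Only readers with extreme opinions (1 or 5 stars) actually post a review.
posted = [r for r in true_ratings if r in (1, 5)]

true_mean = sum(true_ratings) / len(true_ratings)
posted_mean = sum(posted) / len(posted)

print(f"mean if all readers were polled: {true_mean:.2f}")
print(f"mean of self-selected posted reviews: {posted_mean:.2f}")
```

Under these made-up assumptions, the posted average drifts well below the average you would get by polling everyone, even though the posted reviews are all perfectly honest-- the bias comes entirely from who chooses to post.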
I, for one, would welcome any change to the system that would make science more collaborative than competitive, with a freer exchange of information and careers that depend more on the science itself than on research dollars earned.