Friday, December 20, 2013

Grants and the measure of a scientist's 'worth'



All academic scientists worry about grants. In my weekly lunches with other faculty, that is often one of the first points they raise as a source of concern. I now anxiously await a score (if any) from my most recent NIH submission. Some colleagues of mine reported either euphoria associated with their recent NSF funding or sadness with recent NSF declines. Part of the worry is about keeping the science going that we want to do. Part is about not laying off good employees. But sadly, a big part is "image" to the university and to peers. Administrations (and often peers) frequently ask how big particular grants are and then implicitly or explicitly rank the faculty member based on that amount.

We use dollar amounts of grant funding to assess faculty all the time: for hires, yearly performance raises, tenure and promotion, etc. Part of this is reasonable-- grant funding is competitive, so in an ideal world, someone who has good, creative ideas for feasible, high-impact research should acquire grant funding more easily than someone who lacks such ideas. Further, virtually no research is "free"-- researchers need salary for themselves and their labs. That research may be subsidized by the university (as part of our "9-month" salary and TA-ships for students), but it's still not free. Hence, we need money to sustain our research.

That said, many readers will agree we've become too grant-obsessed in our assessments at all levels. New faculty members are immediately dubbed "successful" or "hotshots" if they acquire funding early, whereas early publication of high-impact research in a big journal often has a lesser effect. I recall once (I'll be vague for confidentiality) when two assistant professors were up for tenure simultaneously. One had multiple papers in the most prestigious science journal and multiple others elsewhere, with a consistent publication rate across those years, but had acquired very little funding. The other acquired federal funding early but didn't publish anything until putting out a small number of papers in the year they were up for tenure (and none in journals as prestigious as the first's). The faculty tenure vote was more strongly favorable for the second than the first, citing "sustainability" as a concern for the first.

Let me use an analogy. Funding is like gas to make the engine of research run. However, comparing faculty based on grant dollars is like comparing two cars on how far they'll go based on how much gas is in their tanks. Many scientists are like Toyota Prius plug-ins (beyond their interest in reducing emissions)-- yes, they need gas, but they can go very, very far on a small amount (~58mpg). Other scientists may be more like an 8-cylinder Chevrolet Camaro (~14mpg), or even a coach bus (~6mpg). There is even empirical evidence that very heavily funded labs, on average, produce less per dollar than mid-sized labs (ref).

Again, research isn't free, and sustainability is a concern, so we should not ignore funding. However, I will argue that IF we are to use grant dollars as part of a measure of evaluation, we should simultaneously consider that investigator's track record of "research efficiency per dollar" (like gas mileage). How many impactful new insights have they published per dollar? Shouldn't we be in awe of those who publish high-impact work while using less taxpayer money (and thus leaving more for other research)? Shouldn't we consider research sustainability not just by how much funding you have, but by how well you'll do in the inevitable situation in which you have little? There are multiple ways to publish in PLoS Biology, Science, or Nature-- two are "scale" (you do something that's slightly creative but on a grander scale than anyone else has done before-- clearly expensive) and "innovation" (you come up with a truly creative idea and test it-- perhaps not expensively). It's time we spent more effort giving attention and reward to the latter of those two approaches, especially now when grant dollars are preciously limited.
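For concreteness, here is a minimal sketch of what such an "efficiency per dollar" calculation could look like. Everything in it-- the lab names, the dollar amounts, the notion of "impact-weighted publications," and the per-$100k normalization-- is a hypothetical placeholder chosen only to show that funding and output can be combined into a mileage-like ratio; it is not a calibrated metric I'm endorsing.

```python
# Toy "research mileage" calculation. All names, numbers, and the weighting
# scheme are hypothetical -- the point is only that dollars and output can be
# combined into an efficiency ratio, not that this formula is the right one.

labs = [
    # (lab, total grant dollars over the review period, impact-weighted publications)
    ("Lab A", 3_000_000, 12.0),   # heavily funded -- the "coach bus"
    ("Lab B",   600_000,  9.0),   # modestly funded -- the "plug-in hybrid"
    ("Lab C", 1_500_000, 10.0),
]

def efficiency(dollars, weighted_pubs):
    """Impact-weighted publications per $100k of funding."""
    return weighted_pubs / (dollars / 100_000)

# Rank labs by "mileage" rather than by tank size.
for name, dollars, pubs in sorted(labs, key=lambda lab: -efficiency(lab[1], lab[2])):
    print(f"{name}: ${dollars:,} -> {pubs} weighted pubs "
          f"({efficiency(dollars, pubs):.2f} per $100k)")
```

Under these made-up numbers, the modestly funded lab comes out on top, even though it would rank last if we compared total grant dollars alone.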

Monday, December 2, 2013

Grades (What are they good for?)



Teachers (college or K-12) always complain about grading, and perhaps even more about student whining about grades (see this example). Biology professors, for example, often complain about students who intend to go into medicine being "obsessed" with grades. Given the challenges of the grade-awarding process, I've been reflecting lately on why we grade, and I welcome thoughts from all of you. Personally, I find that this question segues into a more fundamental question about the purpose of formal education.

Most teachers would quickly suggest that we give grades to assess student understanding of the material covered. Some students appreciate virtually all the nuances of the material (and thus get an "A"), others have a very basic understanding (perhaps getting a "C"), and still others fail to understand the material (grade "F"). The grade thus provides feedback to the student and to the institution about how well they grasped the material covered. Fair enough.

So, let me follow with another question-- why do teachers teach material to students? Presumably, it's because the material is worthwhile, and it is thus desirable for the student to learn it. If the purpose is for students to understand and appreciate the content, then an "F" indicates a failure not just of the student but of the teacher's purpose as well. If we desire students to learn something and they fail to do so, then both student and teacher roles have failed (irrespective of whose "fault" that failure was). In this regard, our system is counterproductive to its purpose in that, if one or more students fail to learn material covered, the response is to stick an "F" label on the student and simply move on. Given there may be numerous reasons the student failed to grasp the material (including bad timing or perhaps a teaching style that did not work well), why would we not let students take more opportunities to learn a given body of material, assuming learning the material is indeed valuable?

When we talk about "tests", we think of tests in schools with grades. Here's a different example-- a driver's license test. This test is worthwhile-- it provides training that may even save the life of the awardee and gives certification of their ability. There are no grades-- a student either passes and earns the certification or does not. If they fail twice and later master the material to pass, there is no consequence from the original failed attempts-- they are irrelevant. All that matters is that the student has now mastered the valuable material.

Our "grade-obsessed" system has an entirely different purpose-- the stratification of students. This stratification may reflect effort or ability, though we can never be certain of the relative weighting of the two in the outcome. Some of the stratification may be arbitrary, too, as some students may have been ranked low directly as a result of having one particular teacher (whose teaching style did not work for them) and not another.

Coming back to the example of premedical students, it's again unquestionable that medical schools use grades as one of their most prominent criteria for admission (along with others, such as MCAT score, rigor of coursework, letters, etc.). By awarding grades, undergraduate professors facilitate this stratification of applicants. I think it's safe to argue that, all else being held constant, every non-A reduces an applicant's probability of admission to top-tier medical schools, even if only slightly. The same holds for undergraduate admission-- all else being equal, every non-A in high school reduces the range of schools to which the applicant may be accepted (and the associated financial aid). How can we blame students for seeming grade-obsessed when faced with this reality?

Basically, I think the current system focuses too heavily on innate ability and luck, and gives too little credit to people who are willing to strive hard but were incompletely successful on their first attempt-- and I think that willingness is a big predictor of eventual success. I see no reason why, as with driver's license tests, we don't let people re-learn and re-test, as those people may in the end understand the topic just as well or better, while also having demonstrated perseverance. In fact, with the current system, there's frequently virtually no reward for going back and trying to better understand what you didn't understand the first time-- totally contradictory to our stated goals.

I find these facts to be very disturbing. I did not enter the educational enterprise for the purpose of stratifying students-- I would prefer that students actually learn what I teach. Some colleges allow grades to be optional for some or many classes, but even some of the more famous examples people cite (e.g., Reed College) still record grades in the end.

Can the situation be fixed? I think any solution would involve a radical change in how education works. My first thought was that we'd follow the driver's license example and report specific competencies. For example, students in a transmission genetics course could get certified for competency in their understanding of meiosis, recombination, genetic mapping, heritability, Hardy Weinberg genotypes, etc. However, that approach merely moves the problem-- what if someone only grasps these concepts at the most basic level, and then moves on as though certified with full understanding/ competency?

Honestly, I think the solution (which itself has numerous problems-- see below) is to separate the process of teaching from that of assessment/ stratification. This solution may be more feasible now than in years past, given the growth of resources available electronically. We can still have assessments in classes, but they'd be more for students to self-assess and not for permanent records. A student would finish any genetics class they like (live, online, self-taught from books, whatever), and when they feel they are adequately prepared, take a "for-the-record" assessment. These assessments might only be offered once every semester or once every year, so students couldn't just keep retaking them weekly. However, students could retake the assessment after the waiting period, up to some maximum number of times (maybe 3-5).
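To make the mechanics of that waiting-period-plus-cap rule concrete, here is a minimal sketch. The one-year window and the cap of four attempts are placeholders I picked for illustration, not part of a worked-out policy.

```python
# Minimal sketch of the retake rule described above: a fixed waiting period
# between "for-the-record" attempts plus a cap on total attempts.
# The 365-day window and 4-attempt cap are illustrative placeholders.

from datetime import date, timedelta

WAIT = timedelta(days=365)   # e.g., one attempt per year
MAX_ATTEMPTS = 4

def may_retake(previous_attempts, today):
    """Return True if a student may sit the for-the-record assessment again."""
    if len(previous_attempts) >= MAX_ATTEMPTS:
        return False
    if previous_attempts and today - max(previous_attempts) < WAIT:
        return False
    return True

# Example: two prior attempts, the most recent over a year ago -> eligible.
print(may_retake([date(2011, 12, 1), date(2012, 10, 1)], date(2013, 12, 2)))  # True
```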

What are the strengths of this approach? Teachers focus on teaching and not on grades. They are no longer involved in the stratification process-- their only goal is to help students learn the material. With such a change, students would better accept that "we're on the same side" with respect to learning. Again, teachers should still provide extensive in-class assessments for students to practice on, but the grades on those tests are informational only. For students, there are two large benefits. First, they can learn however they feel works best for them. Those who prefer live, standard classes can take those. Those who prefer online classes can take those. Second, it provides students with a "marketplace" of opportunities. Some teachers may be known to focus on particular subsets of the material (specialties or areas of research). Students can learn those areas from those teachers, and go to other teachers for other specialties within the scope of the assessment.

The approach has major weaknesses, though. Students would spend a lot more time researching class options and outcomes rather than just taking "the genetics class offered this semester at my school." They may also be sick or upset on the day of the test and have to wait a year to repair a weak grade from a single sitting (though this may already be true for heavily weighted final exams). Teachers and professors, for their part, give up control of the tests. Much as we complain about grading and grade complaints, I suspect we'd complain more about a standardized test not focusing on what we think is most relevant. We'd probably also get pressure from students (and administrators) to match course coverage to what's likely to be on the test, and professors would immediately scream that their academic freedom to teach whatever and however they like is being infringed upon. (K-12 teachers already encounter this issue with state scholastic requirements.) Finally, there's the question of who actually makes these tests. Honestly, I don't see that this solution is feasible, as the negatives are huge.

Are we stuck with the current system, where teachers' roles often devolve to presentation, assessment, stratification, and moving on*? Or are there alternatives? I welcome feedback from readers.

* Footnote: I realize that many teachers do a lot more than "presentation", including but not limited to one-on-one mentoring of students outside the classroom, including mentoring on material no longer being covered in class.

Sunday, November 3, 2013

Should societies facilitate making scientific meeting presentations publicly available?



The answer always sounds simple and obvious initially, yet it never seems to pan out that way, and the problems always come down to money and career advancement. The value of science comes not from doing experiments but from relaying the results and interpretations of completed studies. Science has no value while it sits in a lab notebook or on a laptop's hard drive-- it has value when others can see it, use it, test it, and expand upon it, when knowledge is shared. Shouldn't we make our results as publicly available as possible, and as early as possible? Further, our research was sponsored by federal funds ultimately deriving at least in part from taxes-- it was done on the backs of our friends and neighbors. How can we justify not maximizing its speed of impact?

You may think I'm talking about open or free access to refereed scientific journal publications, or the deposition of scientific preprints in services like bioRxiv, arXiv, or Haldane's Sieve. The argument certainly can apply to both of those activities, but I'm actually talking about something different. I'm talking about scientific presentations at conferences.

I have a proposal that I'd like to raise with one of my scientific societies in a few weeks-- one that was discussed in an earlier blog entry. It sounds simple-- the society should facilitate recording contributed talk presentations (with permission from the presenters) and post them on YouTube or an equivalent site. This has been done successfully before for smaller meetings (see this conference). Let's also facilitate having presenters (again, with their consent/ permission) upload posters to a free online site (e.g., this one). What better way to maximize the impact and speed of sharing? Anyone in the broader community could watch the talks and learn the exciting new ideas, potentially long before a final manuscript is ready, much less a refereed publication. And for conferences with multiple concurrent sessions, this allows attendees to see talks they'd missed-- who wouldn't want this?

In some sense it's not contentious, since it'd all be voluntary. If you don't opt in, it doesn't happen. If you don't want to share for whatever reason, then fine, don't. In talking with several colleagues, though, it seems the idea of a society even facilitating this effort is at least potentially contentious. Many concerns were raised:

1) Would such presentations online constitute "publication" and thus prevent the author from being able to publish the final version of these results in a refereed journal? Most journals in my field would not be so ruthless, but some journals, especially in biomedical areas, are (see this list of journals/ publishers and their policies on prepublication of manuscripts). From the publisher standpoint, they want the press associated with the findings (hence "money"), and they thus manipulate scientists by limiting their ability to translate the work into refereed journal publications (hence "career advancement"). Those who are concerned ask-- by providing this service to facilitate recording, is the society potentially hurting some of its members?

2) Would such online presentations potentially open the facilitating society or conference organizers to damages or lawsuits if the presenters fail to consider copyrights? Many scientists are careless in grabbing images from Google image search (irrespective of copyright) or failing to get proper permissions to show figures from work published in non-open-access journals. If the society or conference organizers "sponsor" conference recordings, could they be sued (hence again, "money")? Or perhaps a junior scientist may make an off-hand negative comment about an established scientist's work, and be penalized in some way (hence again "career advancement" concerns)?

3) Would allowing a fraction of the conference to be posted online make it so fewer people elect to attend the conference in person, thereby devaluing the conference ("career advancement" for all participants) and potentially also causing the sponsoring societies to incur a financial loss ("money")?

4) Would ruthless others attempt to "scoop" results by copying the work or rushing a related experiment, particularly affecting junior scientists who generously opted to have their presentations shared ("career advancement")?

5) Would people see the results yet misuse them without fully understanding the context the way the presenter does?

Again, a seemingly simple idea, but many complications. The discussion is worth having. I'll give my answers to the five criticisms above, though there are others I won't go into.

On #1-- I have two replies. First, such policies are rare, at least in my subfield. Second, I think the community should take a strong stance of not publishing in journals that behave so ruthlessly and demonstrate such antagonism to scientific dissemination (their purported goal).

On #2-- I think there is reason to be cautious and to give guidance to presenters on proper attribution/ permissions. That said, I find it hard to believe that, short of airing clips of popular movies currently in theaters, there would ever be retribution sought more severe than a demand that something be removed from the internet in a specified timeframe (which can be done). On the junior scientist issue, the situation is potentially "better" if the content is recorded, since they are likely to be quoted correctly rather than misquoted as having said something worse than they actually did.

On #3-- The biggest benefit of conferences is not sitting passively in talks-- far from it (see this blog). I'd never skip a conference because a fraction of the talks were online. Further, I can't imagine, at least in the near-term, that more than 20% of a very large conference would opt in, given the concerns associated with #2 above and #4 below.

On #4-- This is an issue for every conference presentation, irrespective of whether it's online or not. People have cellphones with cameras, so if they are so ruthless, they can snap a picture of the poster. They can take detailed notes at talks. I totally acknowledge that it's a greater concern when the material is made so much more accessible (though the recording also potentially provides clear documentation of when the presenters shared this work and thus their primacy). So yes, this is a concern, and it merits consideration by participants when deciding whether to take this risk in exchange for sharing their results. But I think they should be allowed to make this decision for themselves rather than having the society paternalistically decide not to permit the option.

On #5-- People misinterpret the context of results from fully documented peer-reviewed journal publications all the time-- it's hardly a unique problem. This concern was raised repeatedly in the context of trying to stop the now-mandatory sharing of data used in published papers in my field. While I'm sure this happens, bluntly, I've always found this criticism to be an uncompelling reason for not sharing, and condescending to one's colleagues.

But I invite readers of this blog to submit their own thoughts. Maybe I'm totally wrong in my responses, or maybe there's even zero interest in this as a service. Is offering the OPTION to record talks at a conference and make them freely available a good idea? Same for submitting posters to an open repository? Do the concerns outweigh the benefits, and am I just being audacious by pushing this idea? Comments very welcome!

Friday, June 7, 2013

On "super-professors" and the MOOC pushback



As with all new things that receive a lot of (arguably "too much") positive press, the backlash necessarily ensues. So it is now with MOOCs. Early, simplistic pushback came in the form of noting the lack of student-professor interaction possible in a live classroom (e.g., this NY Times article and associated NPR interview). As statistics came out on the early MOOCs, attention focused heavily on poor completion/ high "failure" rates (e.g., this Money magazine article). The most recent negativity is different and warrants more attention, highlighted recently in a letter by San Jose State University professors to a MOOC instructor. Specifically, there's now an assertion by several in academia that professors (or, as negative writers like to hyperbolize, "super-professors") who teach MOOC courses are providing a tool that reckless universities are using to dismantle departments, reduce costs by hiring cheaper, non-expert teachers, etc. I find this last criticism the most troubling, since it's leveled by colleagues and possibly friends rather than random writers (e.g., this blog, or the rather scathing comments at the bottom of this blog).

I'll present my thoughts on each of these in turn. Honestly, I think the first two arguments display naiveté about how MOOCs are used by the majority of users, but the last merits extensive discussion and continuing dialogue-- I now think both sides (including mine) responded too hastily at first.

On fewer interactions and/ or lower quality of MOOCs relative to regular college classes: My initial reaction to this was a mere, "Duh!" Of course there's more potential for student-professor interaction in in-person classrooms and campuses than via MOOC. I'd be surprised if anyone thinks taking a set of MOOCs is equal to "having a college education." I've certainly not heard any professor say that the MOOC version of their class was "equivalent" to interacting with them directly. I also have not heard of thousands of high school students who previously planned to go to college deciding now that they don't need to and will just take some MOOCs. This latter non-effect is underscored by the observation that most MOOC students are older, and many already have college degrees.

On the high "failure" rates: This is partially the fault of MOOC participants (venues, faculty, and colleges) "overselling" enrollments. For example, my first MOOC iteration peaked at 33,000 enrolled students, yet something like 2,000 got certificates of completion. These numbers are totally deceptive-- the first is fictitiously high, and the second is fictitiously low. The "33,000" is the number of people who clicked a link to "enroll". When you "enroll" in a college class, you commit funds and time to doing the entire thing, and you anticipate a grade that will help or hurt your probability of subsequent advancement. For a MOOC, you may "enroll" just to see the full syllabus-- it's more an expression of potential interest than a commitment in any way. Given that you may enroll months before the MOOC even comes online, it may be just a way of getting added to the e-mail list for the class to possibly peek in on it later.
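To make the arithmetic concrete, here is a minimal sketch of how the headline "completion rate" shifts depending on which denominator you choose. The 33,000 and ~2,000 figures come from above; the count of students who actually engaged past the first week is a made-up placeholder for illustration only, not a real number I'm reporting.

```python
# Completion rate under two denominators. The 33,000 "enrolled" and ~2,000
# certificate figures are from the text above; the "engaged past week one"
# count is a hypothetical placeholder, not a real statistic.

clicked_enroll   = 33_000   # anyone who ever clicked "enroll"
engaged_week_one = 8_000    # hypothetical: actually watched videos or tried an assessment
certificates     = 2_000    # earned a certificate of completion

def pct(numerator, denominator):
    """Percentage of numerator relative to denominator."""
    return 100.0 * numerator / denominator

print(f"Rate over everyone who clicked 'enroll': {pct(certificates, clicked_enroll):.1f}%")
print(f"Rate over students who actually engaged: {pct(certificates, engaged_week_one):.1f}%")
```

Under these assumptions the same course looks like a ~6% "completion rate" or a ~25% one, which is why I call the headline numbers deceptive.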

Related to both of the above criticisms-- MOOCs present college class material, but MOOC viewers are, by and large, just curious people, not people choosing a MOOC as an alternative to college. They want to peek at what's offered, perhaps with no more thought or intention than when you change the channel to see what's showing on TNT. Do you "fail" a MOOC if you don't watch all the videos or complete the assessments? Only in the same way you "fail" Law and Order on TNT if you end up turning it off after 40 minutes. I'll give a few personal examples-- I signed up for two MOOCs over the past few months: Useful Genetics and Irrational Behavior. I signed up for Useful Genetics just because I wanted to see the coverage and the style in which it was taught. I watched one complete video, a few pieces of others, and a few Discussion Forum threads. That was all I wanted out of the class-- did I "fail" it? Technically yes, since I was enrolled and did not complete the assessments, but in reality, I was neither "truly" a student in it nor did I fail it. For Irrational Behavior, I did one assessment and watched the first three weeks of videos, but I got busy after that. The class is over now, but I downloaded all the remaining videos and intend to watch (many of) them sometime later. Again, did I "fail" it? You decide.

On MOOCs dismantling college classes: Perhaps I was naive myself, but I never imagined this would arise as a criticism of professors offering MOOCs. As discussed in a prior blog entry, the main reason I embarked upon a MOOC was to motivate myself to record videos in order to offer my on-campus students a "flipped class" experience. This type of format is normal in the humanities (presumably few or no literature professors spend entire periods reading Shakespeare to enrolled students), but it has traditionally been less common for introductory science courses. My impression, as discussed in another blog entry, was that the flipped class format, leveraging the MOOC for videos, was an asset for my students. Indeed, I naively thought other faculty might "want" to use a few of my MOOC videos (or any others) rather than go to the trouble of creating their own. Keep in mind I spent literally hundreds of hours preparing the MOOC-- why duplicate efforts?

I won't speak on the San Jose State University incident, since all of it is second-hand to me. I will say that I think college administrations are doing their students a disservice if they do any of the following things:
1) replace knowledgeable, long-term faculty with a MOOC
2) replace knowledgeable, long-term faculty with a MOOC+TA (or MOOC+short-term faculty, or MOOC+ out-of-field, long-term faculty)
3) overburden faculty by asking them to teach more courses and pretending it's less effort since the courses can be associated with MOOCs
4) in any way "dictate" to faculty that they must teach with specific formats.
Basically, I think MOOCs should be available for any existing faculty to utilize if they see fit, but they should in no way be "forced" upon faculty or used as an excuse to keep departments from maintaining expert, long-term faculty. I think all parties agree on these points.

Now, if administrations dismantle departments, is it also the fault of the MOOC provider or MOOC instructor? Therein lies the point of potential disagreement. My word choice was poor (and my perspective initially quite naive) in my quote in the Chronicle's blog: I do bear responsibility for what happens with what I put out on the web. If I put a recipe for how to make Agent Orange on the web, and a kid gets sick or hurts others with it, I certainly bear blame. The analogy others used about atomic bombs also has some merit... if I had put out something that destructive.

But let's explore these analogies all the way, then. Is there a "good" use for Agent Orange? Do atomic bombs help most people? All the critics casually avoid the single biggest flaw in their arguments and analogies: millions of people around the world use MOOCs in very constructive ways to advance their knowledge and understanding. In contrast, to my knowledge, not one university has made any alteration to its faculty structure or imposed anything upon its faculty in response to my MOOC, yet thousands of students worldwide have completed it. Some of these students who completed the MOOC report changes in career direction or world perspective directly in response to this educational opportunity (e.g., this report, as well as MANY who have e-mailed me directly or written reviews of the course online). Somehow, these positive effects get swept under the rug by MOOC critics with their sweeping castigations and ridiculous analogies to atomic bombs or other weapons.

Further, MOOC critics often dance around the desired outcome of their critiques-- a sign the criticism may not be fully defensible. If MOOCs have such a net negative effect on the world and MOOC professors (or so-called "super-professors") are complicit, why aren't the critics explicit in telling us, "Stop offering this free education to the world"? I could continue to offer a flipped class to my Duke students with the videos I've already made, but stop offering the videos to anyone outside of Duke University (or distribute them myself directly only to colleagues who swear to never share their existence with administrators). What of the MOOC students? Watch this video to meet a few. Then tell me-- why should I deprive Richard, the train driver in Sheffield (timestamp 12:55), of the chance to pursue his interest? What about Aline, the high school student in El Salvador (timestamp 32:36) whose school never teaches genetics or evolution? Getting a "true" or "full" college education is not an option for them right now, so they should get... nothing? Or maybe we throw them a few links on the web and tell them to go to college? And what about the millions of other Richards and Alines? Would critics really suggest that, because some university administrations are reckless in their actions, millions of people worldwide who are benefiting from MOOCs (and using them in the purest academic sense) should no longer get this resource?

I really do sympathize with the concerned professors (to whom I'm being somewhat rude by referring to them merely as "critics" here). Faculty jobs are in jeopardy, including those of faculty who are doing a better job than their MOOC replacement. Some colleges are making decisions that are financially positive but pedagogically terrible. If the concerned professors have specific suggestions for me (rather than accusations), I'm eager to listen. I'm quite willing to write letters to deans or provosts about the importance of MOOCs in general, and mine in particular, NOT being used as substitutes for courses or mandates for faculty. I'm happy to ask both my institution and Coursera to help with these efforts as well. I'm even happy to bring the topic up with the scientific societies to which I belong and in which I hold leadership roles (e.g., Society for the Study of Evolution, Genetics Society of America, American Genetic Association), to see if perhaps we could write something on behalf of the societies or take other constructive actions.

Nonetheless, some of the residual criticism seems to be coming primarily because the early MOOC professors 1) tend to be from so-called "elite" universities and 2) received a lot of positive press. I won't deny these facts-- I am fortunate to be employed at what is perceived to be an elite university. It's true that much of the public perceives MOOCs as good because they are offered by these elite universities, and I understand that faculty at colleges not perceived as elite find the availability of my course (and other elite-university MOOCs) potentially threatening as a result. I've also received extensive positive publicity for my MOOC efforts (as well as the recent negative publicity I mentioned above), though I stress that, contrary to many insinuations, I've neither received one penny in compensation from my MOOC nor any reduction in my on-campus teaching load. I fully acknowledge that many colleges not offering MOOCs have teachers who are more effective and engaging than I am or than other MOOC professors, and I'd tell anyone debating between a class taught by an in-field professor (irrespective of college) and my MOOC (with or without a TA) to choose the former. It's ludicrous to say that a course is "better" solely because it's offered by a Duke or Harvard rather than a San Jose State or Western West Virginia Community College. I was a student at the College of William and Mary (certainly not Ivy League), and I'm very proud of the education I received there. Rather than further castigating MOOCs and MOOC faculty, perhaps we can work together to fight misperceptions, given we all agree they're untrue?

Now is the time for us to balance the (over)exuberance of early MOOC publicity with the concerns raised about MOOC misuse. I ask that my critical colleagues avoid gross hyperbole, like suggesting that my words or actions show "gross indifference to the welfare of nearly everyone else in their profession". In return, I promise to be more aware of the misuse problem and to engage with faculty on how we might mitigate such misuses of MOOCs by reckless college administrations. But I ask that further discussion of negative effects be balanced-- even if those not fond of (or negatively affected by) MOOCs don't rave about the benefits of MOOCs, at least they shouldn't insult me and the millions of MOOC students out there by pretending those benefits don't exist or that MOOC professors deliver online courses solely for money, glory, or the title of "super-professor."