Friday, December 20, 2013

Grants and the measure of a scientist's 'worth'



All academic scientists worry about grants. In my weekly lunches with other faculty, that is often one of the first concerns they raise. I am now anxiously awaiting a score (if any) from my most recent NIH submission. Some colleagues of mine have reported either euphoria over recent NSF funding or sadness over recent NSF declines. Part of the worry is about keeping the science we want to do going. Part is about not laying off good employees. But sadly, a big part is "image" to the university and to peers. Administrations (and often peers) frequently ask how big particular grants are and then implicitly or explicitly rank the faculty member by that amount.

We use dollar amounts of grant funding to assess faculty all the time: for hires, yearly performance raises, tenure and promotion, etc. Part of this is reasonable-- grant funding is competitive, so in an ideal world, someone who has good, creative ideas for feasible, high-impact research should acquire grant funding more easily than someone who lacks such ideas. Further, virtually no research is "free"-- researchers need salary for themselves and their labs. That research may be subsidized by the university (as part of our "9-month" salary and TA-ships for students), but it's still not free. Hence, we need money to sustain our research.

That said, many readers will agree we've become too grant-obsessed in our assessments at all levels. New faculty members are immediately dubbed "successful" or "hotshots" if they acquire funding early, whereas early publication of high-impact research in a big journal often has a lesser effect. I recall once (I'll be vague for confidentiality) when two assistant professors were up for tenure simultaneously. One had multiple papers in the most premier science journal and multiple others elsewhere, with a consistent publication rate across his years on the faculty, but had acquired very little funding. The other acquired federal funding early but didn't publish anything until putting out a small number of papers in the year they were up for tenure (and none in journals as prestigious as the first's). The faculty tenure vote was more strongly favorable for the second than the first, citing "sustainability" as a concern for the first.

Let me use an analogy. Funding is like gas that makes the engine of research run. However, comparing faculty based on grant dollars is like comparing two cars on how far they'll go based on how much gas is in their tanks. Many scientists are like Toyota Prius plug-ins (beyond their interest in reducing emissions)-- yes, they need gas, but they can go very, very far on a small amount (~58 mpg). Other scientists may be more like an 8-cylinder Chevrolet Camaro (~14 mpg), or even a coach bus (~6 mpg). There is even empirical evidence that very heavily funded labs, on average, produce less per dollar than mid-sized labs (ref).

Again, research isn't free, and sustainability is a concern, so we should not ignore funding. However, I will argue that IF we are to use grant dollars as part of a measure of evaluation, we should simultaneously consider that investigator's track record of "research efficiency per dollar" (like gas mileage). How many impactful new insights have they published per dollar? Shouldn't we be in awe of those who publish high-impact work while using less taxpayer money (and thus leaving more for other research)? Shouldn't we consider research sustainability not just by how much funding someone has, but by how well they'll do in the inevitable stretches when they have little research funding? There are multiple ways to publish in PLoS Biology, Science, or Nature-- two are "scale" (you do something that's slightly creative but on a grander scale than anyone has done before-- clearly expensive) and "innovation" (you come up with a truly creative idea and test it-- perhaps not expensively). It's time we gave more attention and reward to the latter of those two approaches, especially now, when grant dollars are so preciously limited.
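For concreteness, here is a minimal sketch (in Python) of what such a "mileage" comparison could look like, assuming, purely hypothetically, that one could assign each investigator a single impact score. The names, numbers, and the "impact per $100k" unit below are all made up for illustration; this is not a validated measure, just the arithmetic behind the analogy.

# A minimal sketch of a "research mileage" comparison.
# All names and numbers below are hypothetical, purely for illustration;
# a real evaluation would need a defensible measure of impact.

def research_mileage(impact_points: float, grant_dollars: float) -> float:
    """Impact produced per $100k of funding (hypothetical units)."""
    return impact_points / (grant_dollars / 100_000)

# Hypothetical faculty profiles: (total impact points, total grant dollars)
faculty = {
    "Investigator A (small lab, high-impact papers)": (12.0, 400_000),
    "Investigator B (large lab, many grants)": (20.0, 2_500_000),
}

for name, (impact, dollars) in faculty.items():
    print(f"{name}: {research_mileage(impact, dollars):.2f} impact points per $100k")

On these made-up numbers, the smaller lab "drives farther" per dollar (3.0 vs. 0.8), even though the larger lab has more total output-- which is exactly the distinction a dollars-only ranking hides.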

Monday, December 2, 2013

Grades (What are they good for?)



Teachers (college or K-12) always complain about grading, and perhaps even more about student whining about grades (see this example). Biology professors, for example, often complain about students who intend to go into medicine being "obsessed" with grades. Given the challenges of the grade-awarding process, I've been reflecting lately on why we grade, and I welcome thoughts from all of you. Personally, I find that this question segues into a more fundamental question about the purpose of formal education.

Most teachers would quickly suggest that we give grades to assess student understanding of the material covered. Some students appreciate virtually all the nuances of the material (and thus get an "A"), others have a very basic understanding (perhaps getting a "C"), and still others fail to understand the material (grade "F"). The grade thus provides feedback to the student and to the institution about how well they grasped the material covered. Fair enough.

So, let me follow with another question-- why do teachers teach material to students? Presumably, it's because the material is worthwhile, and it is thus desirable for the student to learn it. If the purpose is for students to understand and appreciate the content, then an "F" indicates a failure not just of the student but of the teacher's purpose as well. If we desire students to learn something and they fail to do so, then both student and teacher roles have failed (irrespective of whose "fault" that failure was). In this regard, our system is counterproductive to its purpose in that, if one or more students fail to learn material covered, the response is to stick an "F" label on the student and simply move on. Given there may be numerous reasons the student failed to grasp the material (including bad timing or perhaps a teaching style that did not work well), why would we not let students take more opportunities to learn a given body of material, assuming learning the material is indeed valuable?

When we talk about "tests", we think of tests in schools with grades. Here's a different example-- a driver's license test. This test is worthwhile-- it provides training that may even save the life of the test-taker and certifies their ability. There are no grades-- a student either passes and earns the certification or fails and does not. If they fail twice and later master the material and pass, there is no consequence of the original failed attempts, since those are irrelevant-- all that matters is that the student has now mastered the valuable material.

Our "grade-obsessed" system has an entirely different purpose-- the stratification of students. This stratification may reflect effort or ability, though we can never be certain of the relative weighting of the two in the outcome. Some of the stratification may be arbitrary, too, as some students may have been ranked low directly as a result of having one particular teacher (whose teaching style did not work for them) and not another.

Coming back to the example of premedical students, it's unquestionable that medical schools use grades as one of their most prominent criteria for admission (along with others, such as MCAT score, rigor of coursework, letters, etc.). By awarding grades, undergraduate professors facilitate medical schools' stratification of applicants. I think it's safe to argue that, all else being held constant, every non-A reduces an applicant's probability of admission to top-tier medical schools, even if only slightly. The same holds for undergraduate admission-- all else being equal, every non-A in high school narrows the range of schools to which an applicant may be accepted (and the associated financial aid). How can we blame students for seeming grade-obsessed when faced with this reality?

Basically, I think the current system focuses too heavily on innate ability and luck, and gives too little credit to people who are willing to strive hard but were incompletely successful on their first attempt-- a willingness that I think is a big predictor of eventual success. I see no reason why, as with driver's license tests, we don't let people re-learn and re-test, as those people may in the end understand the topic just as well or better, and will have demonstrated perseverance. In fact, with the current system, there's frequently virtually no reward for going back and trying to understand better what you didn't understand the first time-- totally contradictory to our stated goals.

I find these facts to be very disturbing. I did not enter the educational enterprise for the purpose of stratifying students-- I would prefer that students actually learn what I teach. Some colleges allow grades to be optional for some or many classes, but even some of the more famous examples people cite (e.g., Reed College) still record grades in the end.

Can the situation be fixed? I think any solution would involve a radical change in how education works. My first thought was that we'd follow the driver's license example and report specific competencies. For example, students in a transmission genetics course could get certified for competency in their understanding of meiosis, recombination, genetic mapping, heritability, Hardy-Weinberg genotype frequencies, etc. However, that approach merely moves the problem-- what if someone only grasps these concepts at the most basic level, and then moves on as though certified with full understanding/competency?

Honestly, I think the solution (which itself has numerous problems-- see below) is to separate the process of teaching from that of assessment/stratification. This solution may be more feasible now than in years past, given the growth of resources available electronically. We can still have assessments in classes, but they'd be more for students to self-assess and not for permanent records. A student would finish any genetics class they like (live, online, self-taught from books, whatever), and, when they feel adequately prepared, take a "for-the-record" assessment. These assessments might only be offered once a semester or once a year, so students can't just keep retaking them weekly. However, students could retake the assessment after the waiting period, up to some maximum number of times (maybe 3-5).
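To make that retake policy concrete, here is a minimal sketch (in Python) of how a "for-the-record" attempt tracker might work. The one-attempt-per-semester rule and the cap of four lifetime attempts are hypothetical choices of mine, picked from within the 3-5 range suggested above; this is a sketch of the idea, not a proposal for an actual system.

# A minimal sketch of the retake policy described above.
# The per-semester limit and lifetime cap are hypothetical choices for illustration.
from dataclasses import dataclass, field

MAX_ATTEMPTS = 4          # hypothetical lifetime cap (the post suggests 3-5)
ATTEMPTS_PER_TERM = 1     # hypothetical: one for-the-record sitting per semester

@dataclass
class AssessmentRecord:
    attempts: list = field(default_factory=list)  # semesters already used, e.g. "2013-fall"

    def can_attempt(self, semester: str) -> bool:
        """Eligible if under the lifetime cap and not already taken this semester."""
        if len(self.attempts) >= MAX_ATTEMPTS:
            return False
        return self.attempts.count(semester) < ATTEMPTS_PER_TERM

    def record_attempt(self, semester: str) -> None:
        if not self.can_attempt(semester):
            raise ValueError("No for-the-record attempt available this semester.")
        self.attempts.append(semester)

# Usage: a student who failed in the fall may sit again the following spring.
record = AssessmentRecord()
record.record_attempt("2013-fall")
print(record.can_attempt("2013-fall"))    # False: this semester's attempt is used
print(record.can_attempt("2014-spring"))  # True: the waiting period has passed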

What are the strengths of this approach? Teachers focus on teaching and not on grades. They are no longer involved in the stratification process-- their only goal is to help students learn the material. With such a change, students would better accept that "we're on the same side" with respect to learning. Again, teachers should still provide extensive in-class assessments for students to practice on, but the grades on those tests would be informational only. For students, there are two large benefits. First, they can learn however they feel works best for them. Those who prefer live, standard classes can take those. Those who prefer online classes can take those. Second, it provides students with a "marketplace" of opportunities. Some teachers may be known to focus on particular subsets of the material (specialties or areas of research). Students can learn those areas from those teachers, and go to other teachers to learn other specialties within the scope of the assessment.

The approach has major weaknesses, though. Students would spend a lot more time researching class options and outcomes rather than just taking "the genetics class offered this semester at my school." They may also be sick or upset on the day of the test and have to wait a year to repair a weak score from a single sitting (though this may already be true for heavily weighted final exams). Teachers and professors, for their part, give up control of tests. Much as we complain about grading and grade complaints, I suspect we'd complain even more about a standardized test not focusing on what we think is most relevant. We'd probably also get pressure from students (and administrators) to match course coverage to what's likely to be on the test, and professors would immediately scream that their academic freedom to teach whatever and however they like is being impinged upon. (K-12 teachers already encounter this issue with state scholastic requirements.) Finally, there's the question of who actually makes these tests. Honestly, I don't see that this solution is feasible, as the negatives are huge.

Are we stuck with the current system, where teachers' roles often devolve to presentation, assessment, stratification, and moving on*? Or are there alternatives? I welcome feedback from readers.

* Footnote: I realize that many teachers do a lot more than "presentation", including but not limited to one-on-one mentoring of students outside the classroom, even on material no longer being covered in class.