Thursday, March 17, 2016

Where does scientific society money go, and the future of open-access


There's been a bit of a storm on Twitter recently about PLoS, open access, and scientific societies. One underlying issue seems to be what the "return on investment" is from publishing in different venues.


I've held leadership positions in 3 major societies over the past 5 years (Society for the Study of Evolution, Genetics Society of America, and American Genetic Association), so I re-examined the reports I got from those societies to compare/ contrast. I won't present exact numbers here, since that's inappropriate and since I'm looking at a single year's report (2012 or 2014) for each society rather than a multi-year running average.

All three societies were (at least as of those reports) financially quite healthy, each with reserves well over $1 million that produce some investment income. All three, at least in those years, were saving more than spending. In all cases, much of each society's yearly income came from its flagship journal (and specifically from institutional subscriptions to that journal). I should add-- while colleagues are often critical of price gouging by publishers like Wiley-Blackwell, well over half of the proceeds from publication may go to the scientific societies, so we scientists & scientific societies are the gougers. (The publishers are just our hit-men.) The annual conferences produced a very small net profit for two societies (SSE and GSA, though the latter sometimes loses money on specific organismal meetings in specific years) but a net loss for the third (AGA, which keeps an outlay for the conference in its yearly budget).

What do they spend their journal income on? Clearly some of it is used to keep publishing cheap for scientists. All societies have some other shared expenses like stipends for some editorial (or occasionally officer) positions, travel for the board/ council members, etc. In the year I examined, SSE spent a lot on student awards (over $100K on students alone) as well as some on other awards, various educational initiatives, and annual meeting symposia. More recently, SSE has taken on large expenditures for funding outreach efforts and a workshop for trainees on preparing for diverse careers. GSA maintains multiple staff (some shared with other societies), so most of its expenses went there, but the staff work on the journals, spearhead education & outreach efforts, handle communications & policy activities, and more. GSA also gave out various awards both to distinguished established researchers and to trainees, and covered the diverse expenses mentioned above (e.g., board travel). AGA spends a ton on "special event awards," which fund workshops and symposia at other meetings, and also spends on its own annual conference. AGA also more recently started sizable "Ecological, Evolutionary and Conservation Genomics (EECG) Awards" for graduate students and postdocs.

The flagship journal of each of these three societies (Evolution, Genetics, and Journal of Heredity) is not open-access, but an open-access option can be purchased for a particular submission at the time of acceptance. Uptake on that is low-- I think under 20%. Each journal makes, or will make, its old issues freely available online (Evolution does not currently but will do so for issues >2 years old beginning in 2017). Publication costs are moderate (Genetics) or even waived for short papers (Evolution for members, J Heredity for everyone).

Here's what these journals and societies do now-- they take in large sums of money from institutional journal subscriptions (which cost $500-$1200 annually) and use those funds both to keep publication cheap for scientists and to provide several services (including grants) to the scientific community. Libraries pay in order to grant access to their communities. What this means, by definition, is that access is not "free": if a reader does not use one of these libraries (e.g., they do not work for a big university), they either have to wait perhaps 2 years to see the published science or have to pay a one-time access fee.

The open-access movement argues this setup is unacceptable. Federal grant dollars often support the research (let's focus on federally funded USA-based scientists and USA-based readers for simplicity here), yet a citizen who "paid" for the research with their tax dollars does not get instant free access to the results of that research, but has to either wait 6 months to 2 years (depending on the journal) or pay again to see them sooner. Even that is not always exactly true, given the growth of preprint servers like arXiv and bioRxiv-- the core science may be available immediately and for free, but the improvements from peer review and copy-editing are not.

Could these society journals go fully open-access? Sure, but it's non-trivial, since the funding currently garnered from libraries would largely disappear. With such a large drop in income, there would be two choices: raise prices elsewhere (via hefty open-access fees for submissions and/ or more expensive, profit-generating conferences) and/ or cut services (i.e., reduce student grants, education/ outreach activities, and/ or funding for trainee workshops). Some institutional libraries may also support open-access publishing (e.g., my institution provides such funds), but these funds are often limited and run out (e.g., as of when I'm writing this in March 2016, my institution's open-access funds for fiscal year 2015-2016 are already depleted).

How does PLoS manage? PLoS doesn't provide all the same services to the community that a scientific society does. I say this not as a criticism of PLoS-- they function solely for publication (and do it extremely well), whereas societies facilitate publication while also providing diverse other services. I'm thrilled PLoS is running a profit now-- they provide a valuable vehicle for publication and have been leaders in the move toward openness in science. They are also a business, so they need to run a profit-- and again, our scientific societies are all running profits, too.

Basically, we have set up a situation where any major move to open-access by a profitable society journal risks a major reduction in other services to which the community has grown accustomed. Is this risk worth it? Here, I will go out on a limb and give a personal opinion rather than just lay out the options. I would argue yes, with a caveat. I feel that societies have no greater obligation than to maximize the dissemination of science, even if that means only accelerating access by 1-2 years. Giving out student grants or funding workshops & outreach events-- those are all very valuable, but their impact is typically limited to the direct participants. The "potential" impact of accelerating access to all the literature (both inside and outside the scientific community) seems greater to me. Making science available immediately is the democratization of knowledge. Further, moving to open-access removes the community's reliance on libraries, which has always been a source of financial uncertainty-- it provides the opportunity to move to what I think may be a more stable model, even if a less lucrative one. The caveat I would add is that there would have to be support for publication costs for scientists with limited grants... I would hate for science to go unpublished in journal X because the research team did not have (sufficient) funding. I realize the consequence may be some reduction of other society services (and certainly I would not advocate those going to zero), but I think the positives outweigh the negatives.

I realize not all will agree with my opinion above, and I respect that the other side is quite valid. I also do NOT argue that a move by society journals to open-access should be taken hastily-- clearly, we need to assess the potential financial consequences and be prepared to react if there are losses. Nonetheless, I hope this stream of consciousness is useful or at least thought-provoking. Please feel free to disagree, but please be nice.

Sunday, February 21, 2016

Dear Reviewer 2...

Dear Reviewer 2:
I thank you for the review of my manuscript, though it was rather brief. I understand you think this "isn't the strength of advance necessary to publish" in the journal to which I submitted, but why did you take 6 weeks to decide this when the advance was described in the abstract, which you saw before agreeing to review? I would have been happier with more feedback on why you think this is "obvious" since I could not find other papers saying it, at least in this context. Could you please point me to some examples? With more detail, I could have discussed this with the grad student first-author, when she sat across the table from me with eyes puffy, red, and watery and tried to think about whether academia is a good fit for her in the long-term after receiving such a snippy review. I guess you were busy.

Dear Reviewer 2:
I thank you for the review of my manuscript. I appreciate that you think my conclusion is not yet rock-solid, and that there are more experiments to be done. I agree-- I said this myself in the manuscript, too. On the other hand, this already took 3 years of work by multiple people, and the inference is quite strong, far more likely than the alternatives. I did send this to a discipline-specific journal rather than a broad journal, too. Might there be some value in disseminating this idea, and the support garnered so far (with appropriate qualifiers), now, so others can learn of it and think about it? My grant is up for renewal, my collaborator is up for tenure, and the postdoc who did the work is on the job market. The next experiment that you say is needed requires another 3+ years and hundreds of thousands of dollars, and it may not even work. I guess none of that matters as much as publishing only 100% rock-solid interpretations.

Dear Reviewer 2:
I thank you for the review of my grant proposal. I see you have lots of concerns. You say you think my approach won't work, but people use it all the time for studies like this-- I cite many such papers in the Methods. I thought I listed the potential outcomes and interpretations, too, but you didn't comment on that table-- did you see it? You also note that it's unclear how well results in my system will extend to other systems. Is that not true for any experimental study in any system? Isn't that the value of having many studies on any phenomenon, and then doing a meta-analysis later? I wish you had elaborated on these facets before submitting your review--  I could then relay them to the people I'm laying off so they know why they're losing their jobs.

Dear Reviewer 2:
I thank you for the review of my grant proposal. Your comment that the advances from it would be "incremental" stung a little. No one has tested this hypothesis before, ever. What hurt even more was that you described why it's incremental in such vague terms-- did you actually read what I proposed, or did you write your review based on the Aims page? Those things you said are "well known" are indeed well known, but that's not what I proposed to examine-- I was digging much deeper, assuming those to be true. You also criticized my proposed data analyses as "simplistic" but did not elaborate on what you mean-- are they failing to test the hypothesis merely because they are simple? I worked on this proposal for months, even missing out on a vacation with my wife & kids, but I confess, I feel like you flipped through the proposal in minutes before writing this review. I guess I'll miss the next vacation, too, as I try to prepare a new proposal and divine what went wrong with the last one.

Dear Reviewer 2:
I know you feel you are doing your duty to improve science. I know reviewing is a thankless task and takes away from more visible progress, like doing your own science. I know other Reviewer 2s have stabbed you from behind their cloaks of anonymity, and perhaps you even think one of them was me (and you may be right). I know you think your science is better than how it's evaluated and better than some science that gets published/ funded. Perhaps you think that your lab's science is better than mine. Maybe. But please know this. Some of my results may seem "obvious" to you, and some may still be incomplete/ in-progress. I will try some risky projects, and some of my results may not extrapolate. I'm merely human, just like you. And I have been Reviewer 2. But I'm going to try to remember all of these experiences so that I'm not "just like you" in the review process. In that regard, I thank you for reminding me of the human face behind scientific products.


(Note: These are all based on experiences over many years, not very recent events.)

Wednesday, October 15, 2014

When do we REALLY have too many PhDs, and what then?

Recently, NPR ran a suite of stories (sample1, sample2) on biomedical PhDs either leaving academia or not having academic jobs available to them. I teach a class to entering PhD students, and many of them expressed concern both about the general fear being propagated by these stories and about their own prospects post-PhD.

Meanwhile, everyone with (or pursuing) a PhD has an opinion on the value of a PhD, on whether there really is a crisis of too many PhDs, etc. Some describe a virtual pyramid/ ponzi scheme that's existed for decades, wherein growth in the number of scientists in general, or PhDs in particular, is unsustainable. Others persistently argue that "The skills ... are useful in many professions, and our society needs to be more, not less, scientifically and technically literate." Still others say that the system should continue to take in many PhDs since we never know which will be the most successful. (I suppose there's an implicit "... and to hell with the losers who don't get the jobs they anticipated" in this last point of view.)

Elements of truth exist in each, so let's start with facts, then statements of the problem, and finally possible paths forward.

1) FACT-- right now, most science PhDs will not get tenure-track university jobs (for whatever reason). The figure I see most often for the biomedical sciences is ~15% nationwide. It's easy for elite universities to assume that THEIR graduates will fall into that fortunate 15%. We recently collected data for Biology PhDs from Duke University, specifically seeking ones who were not in postdoctoral stints. For our program, the fraction of non-postdocs in tenure-track jobs was indeed around 50% over the last decade. Of the rest, a subset were in non-tenure-track academic jobs, and the others had diverse careers-- research scientists, editors, writers, scientific sales, etc. Hence, even from elite private Research I universities, a large fraction of science PhDs will not get tenure-track university jobs (for whatever reason).

2) FACT-- research faculty have an intrinsic conflict-of-interest in considering whether there are "too many" PhDs. We faculty are judged by our productivity, which is partially (wholly?) associated with the talented trainees we bring into our labs. Turning down the tap of incoming PhD students could reduce our potential productivity, and thus our research program & career advancement prospects.

3) ARGUMENT-- there must be SOME hypothetical "maximum" number of PhDs beyond which there are substantially more PhDs than necessary to fill jobs requiring that level of training. One of the points used against this argument is a general "more education is good." Sure, if time & money were no object, then yes, perhaps everyone could benefit from a PhD. Similarly, some sell the skills that one gets in a PhD as "useful in almost any profession." This is clearly true in a trivial sense-- pursuing a PhD is more valuable than sitting at home and watching Netflix for 6 years. But for someone who'll be either a base-level staff scientist or a general administrator (or perhaps leave science entirely), does it make more sense to get a PhD, or to spend 2 years on a master's degree and 4 years advancing their experience (and stature) within a chosen career that does not require a PhD?

So, with the above points in mind, I ask the first question. When do we REALLY have too many PhDs? Have we already passed that point? Are we coming close?

Here's some data (albeit crude) from a few years ago separated by country:
http://www.nature.com/news/2011/110420/full/472276a.html
Note that, for the US, they specifically mention, "Figures suggest that more doctorates are taking jobs that do not require a PhD."

Some faculty toss out the word "industry" as a solution to the small number of tenure-track positions available, as though companies are struggling to get PhDs and these are easy jobs to snag (and as though these faculty even know what "industry" means). While some areas may be booming, many biomedical industry jobs are also quite competitive, and some PhDs who TRY to go into industry have difficulty getting in the door.

With that in mind, I ask the second question-- what do we do when we really do have too many PhDs? Turn down the pipeline at that point? It would already be too late for those already in the pipeline whom we purported to be mentoring. Some have argued we should reduce support for postdocs, but I think that is foolish since it "strands" people who have already invested in a PhD.

Here are my thoughts on solutions, and I argue that these need to happen now, not later:

A) It's nothing short of criminal for graduate programs and advisers to fail to prepare students for diverse career possibilities. Some trainees may even prefer (gasp!) a non-academic route, not because it'd be any less competitive but because they prefer the work. How do we prepare such students? Some skills overlap between non-academic and academic positions: project management, thinking creatively about science, rigor in science, some hands-on techniques, etc. But some aspects are not emphasized in PhD programs because they're less critical for securing research faculty positions than for other careers-- developing a portfolio of writing pieces aimed at the general public, interfacing and networking with industry representatives, getting "real" teaching experience (a TA-ship is not "teaching"), etc.

We must engage students early in their PhD training to think seriously about their directions and advise them on how best to prepare for their chosen routes. Implementing only a "one-day workshop on diverse careers" is a pathetic solution to a real problem. The myIDP questionnaire is a good starting point. Longer-term programmatic solutions need to be in place, as do individual faculty efforts. For example, I now insist all of my PhD students get trained in at least rudimentary computer programming/ bioinformatics-- this is useful in many careers and certainly broadens the set of careers from which they can choose post-PhD.

What I say above is necessary but perhaps not sufficient-- it assumes there are enough non-academic (or non-tenure-track) positions for all PhDs.

B) We must invest more in master's programs. I'm a big advocate of the PhD having great value, but a research-thesis MS also has value, brings more breadth to one's education and job prospects, and requires far less time.

The last is potentially the most controversial:

C) PhD-granting institutions may need to begin a serious discussion about scaling back the number of PhD students they admit. I stress that the elite research universities should take the lead in this and not presume that Northeastern West Virginia State University (a fictional school) should scale back its PhD program before, e.g., Yale University does. I'm not positive that the community is at the point where we must scale back PhD student numbers, but dismissing this option without serious discussion and exploration means we're waiting for the catastrophe to happen before we're even willing to assess it. Let me reiterate point #2 above-- we faculty at research universities have an intrinsic conflict-of-interest, so we should be even more vigilant to make sure we're not sacrificing those we claim to be mentoring.

Finally, let me close with a note to current PhD-seekers and postdocs. Yes, a lot of your mentors (including me) had it easier than you do. But don't despair prematurely or give up. There are still jobs out there, both academic and non-academic. Pick the direction you want to go (academic or non-academic), find out what you need to do both to secure and to succeed in that direction, and pursue it wholeheartedly. Get advice from both formal and informal advisers-- take initiative in asking for this advice. Don't stop pursuing your dream because NPR interviewed someone who gave up a similar dream and is doing something different. At the same time, diversify your portfolio. Build some skills that may open what you perceive as an appealing "plan B." The reason isn't that you should assume you won't get your plan A job; it's that it's always better to have options than not. And don't overly fear being actually "unemployed"-- while clearly a handful of PhDs have struggled to get "any" job, the statistics on unemployment for PhD-holders are not so bad-- certainly far better than for those with only a college degree. It's almost unequivocal that you got (or will get) something for that time investment in a PhD. In the meantime, if you hear people who've held faculty positions for a decade or longer saying nothing's really different, just know that your successes will have been harder fought than theirs.


Friday, January 3, 2014

Putting College Under the MOOC Microscope



Another year draws to an end, but not before yet another "MOOCs aren't as good as college" story slips into the media (NPR, in this case). Amazing insights are present there, like that MOOCs don't provide as much personal, face-to-face interaction as one can potentially get in a college class. Wow, no one could ever have figured that out. Also, only a very small fraction of people who sign up for a class (requiring, in some cases, literally one button-click of a mouse on a website) view all the lectures or complete all the assessments. Well, blow me down. And the conclusion in the article? "We have a lousy product."

Lousy??? I'm really fed up with the anti-MOOC movement, especially when it comes from within academia. Despite my snide sarcasm above, I do appreciate that much of this continued MOOC pushback is a response to the MOOC overhype that both preceded and overlapped it. What many MOOC dissenters seem to miss is that most MOOC advocates (including myself) never argued they are a "replacement" for a college education and experience. No way-- not even close. The media and a very few zealots played up that line, and they were wrong from day 1.

But let's turn the tables a bit. Let's put "in-person" college experiences under the microscope used for MOOCs. Before that, we must realize that we cannot directly compare completion rates for a college class and for a free online product that provides no credentialing. Especially for introductory-level science courses (the kind I teach in genetics and evolution), the vast majority of college students attend classes for credentialing rather than to satisfy a keen interest in the specific topics. A few months ago, I asked a room full of college students in a workshop, "How many of you look forward to 2 or more of your classes most weeks?" The answer-- one. Keep in mind that all of the students there take 4-5 classes at a time, so the vast majority do not look forward to even half of what they're signed up for. Again, they are signed up for most classes because the classes are "required," either directly or to fulfill some sort of requirement or credit. If students fail to complete an "in-person" college class, not only do they fail to fulfill the requirement and fail to get the credit, but they often have the black mark of an "F" or a "W" on their permanent record. None of that is true for MOOCs-- if you dislike a MOOC, you simply stop watching, without consequence.

How can we compare these experiences fairly, then? A MOOC is more like material students are willing to look at as "extra," with no consequence for failing to complete it. I looked up some statistics from my on-campus class last spring as a comparison-- every week, I provided online resources (often podcasts or pdfs) that were truly "extra"... the resources were available on the same webpage as the required materials for each week, and they complemented what was discussed in the lectures. There were 452 students enrolled. The very first such resource was viewed 100 times. How does this (100/452, or about 22%) compare to the MOOC criticism that "About half who registered for a class ever viewed a lecture"? Again, these were students already in a college class on this subject, and it was material pre-identified for them as relevant. If you look at the supplements from the end of the semester, the views are in the low single digits (potentially just reflecting the times I'd open the files to confirm they uploaded). How does this compare with the MOOC criticism that "completion rates averaged just 4%"?
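For concreteness, here is a minimal sketch of that comparison in Python. The enrollment and early view counts are the rough figures above, the late-semester view count is an assumed placeholder for "low single digits," and the MOOC figures are the ones quoted in the criticisms:

```python
# A rough, illustrative comparison of optional-material uptake in an
# on-campus class versus the engagement figures quoted as criticisms of MOOCs.

enrolled = 452                # students enrolled in the on-campus class
first_resource_views = 100    # views of the first truly "extra" resource
late_resource_views = 5       # assumed placeholder; actual was "low single digits"

on_campus_early_uptake = first_resource_views / enrolled   # ~0.22
on_campus_late_uptake = late_resource_views / enrolled     # ~0.01

mooc_viewed_a_lecture = 0.50  # "about half who registered ever viewed a lecture"
mooc_completion = 0.04        # "completion rates averaged just 4%"

print(f"Week 1 optional uptake on campus: {on_campus_early_uptake:.0%} "
      f"(vs. ~{mooc_viewed_a_lecture:.0%} of MOOC registrants viewing a lecture)")
print(f"End-of-semester optional uptake:  {on_campus_late_uptake:.1%} "
      f"(vs. ~{mooc_completion:.0%} MOOC completion)")
```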

I don't blame these on-campus students for the low uptake at all. They have career aspirations (in my case, mostly pre-med), and frankly, we've placed them into a situation where their grades matter more than what they care to learn about. If they spend time viewing my supplementary materials, that time is not spent studying for organic chemistry or physics. For every B or lower grade they get, their choices of medical schools become more limited, so they need to triage. And maybe they don't even really care about my topics, but they're forced to take my class by major requirements. None of this is true for MOOCs. Further, as I've argued previously, many college classes effectively focus on stratifying students (the essence of a "curve"), and far too little on ensuring that all students who want to be engaged and learn are successful in doing so. MOOCs don't concern themselves with stratification at all-- it's all about engaging and learning for an interested audience. I wonder if college was once that way, centuries ago.

Back to MOOCs: let's drop the percentages and look at just the final numbers. I'll use mine as an example, but I suspect you'd get similar numbers in any of them. My MOOC ran twice. Even if we pretend that the only students who reaped any benefit were those who completed every assessment and earned a passing grade at the end, that number still comes to ~4000 students. 4000 people from around the world quantifiably learned about genetics and evolution as a result of this MOOC. Presumably there are other students who didn't complete it but found some part of the experience personally rewarding or engaging, and they have a greater appreciation for the topic. And best of all, none, NOT ONE, of those 4000+ "had" to do it-- this was quenching a thirst for knowledge, not jumping through a pre-MCAT or biology-major hoop. I'd like to see more "lousy products" like that in the world. How many of those enrolled students would have gone to a local college instead to satisfy this particular thirst? My guess is less than 1%, if any. Finally, I like the thought experiment of what would happen if I just told my on-campus class from day 1, "You'll all get A's no matter what" (obviating the credentialing)-- how many would still be in my classroom three months later? How many years would I have to run my on-campus class under that condition to get 4000 students to have continued to month 3 of my class?

Yes, MOOCs were overhyped. They are no panacea. They don't offer face-to-face interactions with knowledgeable faculty and capable fellow students. They don't invalidate college or provide a serious alternative to it. They don't provide "education for all." Most of the enrolled students already have higher education, so MOOCs' contributions to equalizing opportunity are limited (if for no other reason than variable internet access). And they are misused by some reckless college administrations. But before we cast any more stones at MOOCs for what they "aren't," let's have colleges take a serious look in the mirror at what they've become, and see how badly their faces have broken out.

Personally, MOOCs have helped me see deficits in standard on-campus college experiences. I think the overall college experience needs to be rethought in a big way. It's NOT that I think MOOCs are better or are replacing college, but they highlight college's obsessions with course requirements, with grades, with credentialing, and with hoops of various sorts in the on-campus experience. Unlike on-campus college classes, MOOCs are hoop-free and purely educational: people enroll because they think they want to learn the subject being taught, and they continue in the class if and when they stay engaged in the material and seek to commit their time to it. What a concept that would be for an on-campus introductory science classroom.

Friday, December 20, 2013

Grants and the measure of a scientist's 'worth'



All academic scientists worry about grants. In my weekly lunches with other faculty, that is often one of the first points they raise as a source of concern. I am now anxiously awaiting a score (if any) from my most recent NIH submission. Some colleagues of mine reported euphoria over their recent NSF funding, others sadness over recent NSF declines. Part of the worry is about keeping the science we want to do going. Part is about not laying off good employees. But sadly, a big part is "image" to the university and to peers. Administrations (and often peers) frequently ask how big particular grants are and then implicitly or explicitly rank the faculty member based on that amount.

We use dollar amounts of grant funding to assess faculty all the time: for hires, yearly performance raises, tenure and promotion, etc. Part of this is reasonable-- grant funding is competitive, so in an ideal world, someone who has good, creative ideas for feasible, high-impact research should acquire funding more easily than someone who lacks such ideas. Further, virtually no research is "free"-- one needs salary for oneself and one's lab. That research may be subsidized by the university (as part of our "9-month" salary and TA-ships for students), but it's still not free. Hence, we need money to sustain our research.

That said, many readers will agree we've become too grant-obsessed in our assessments at all levels. New faculty members are immediately dubbed "successful" or "hotshots" if they acquire funding early, whereas early publication of high-impact research in a big journal often has a lesser effect. I recall once (I'll be vague for confidentiality) when 2 assistant professors were up for tenure simultaneously. One had multiple papers in the most prestigious science journal and multiple others elsewhere, with a consistent publication rate across the years, but had acquired very little funding. The other acquired federal funding early but didn't publish anything until putting out a small number of papers in the year they were up for tenure (and none in journals as prestigious as the first's). The faculty tenure vote was more strongly favorable for the second than for the first, citing "sustainability" as a concern for the first.

Let me use an analogy. Funding is like gas to make the engine of research run. However, comparing faculty based on grant dollars is like comparing how far two cars will go based only on how much gas is in their tanks. Many scientists are like Toyota Prius plug-ins (beyond their interest in reducing emissions)-- yes, they need gas, but they can go very, very far on a small amount (~58 mpg). Other scientists may be more like an 8-cylinder Chevrolet Camaro (~14 mpg), or even a coach bus (~6 mpg). There is even empirical evidence that very heavily funded labs, on average, produce less per dollar than mid-sized labs (ref).

Again, research isn't free, and sustainability is a concern, so we should not ignore funding. However, I will argue that IF we are to use grant dollars as part of a measure of evaluation, we should simultaneously consider that investigator's track record of "research efficiency per dollar" (like gas mileage). How many impactful new insights have they published per dollar? Shouldn't we be in awe of those who publish high-impact work while using up less taxpayer money (and thus leave more for other research)? Shouldn't we consider research sustainability not just by how much funding you have, but by how well you'll do in the inevitable situation in which you have little? There are multiple ways to publish in PLoS Biology, Science, or Nature-- two are "scale" (you do something that's slightly creative but on a grander scale than anyone has done before-- clearly expensive) and "innovation" (you come up with a truly creative idea and test it-- perhaps not expensively). It's time we spent more effort giving attention and reward to the latter of those two approaches, especially now, when grant dollars are preciously limited.
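To make the "gas mileage" idea concrete, here is a minimal, purely hypothetical sketch in Python-- the labs, dollar amounts, and crude impact-weighted publication counts are all made up for illustration, not drawn from any real evaluation:

```python
# A toy "research mileage" calculation: impact-weighted output per grant dollar,
# analogous to miles per gallon. All labs and numbers are hypothetical.

labs = {
    # lab: (grant dollars over the evaluation period, impact-weighted publications)
    "Lab A (plug-in hybrid)": (300_000, 12),
    "Lab B (V8 Camaro)":      (2_000_000, 30),
    "Lab C (coach bus)":      (5_000_000, 45),
}

for name, (dollars, weighted_pubs) in labs.items():
    per_100k = weighted_pubs / (dollars / 100_000)  # output per $100K of funding
    print(f"{name}: {weighted_pubs} weighted publications on ${dollars:,} "
          f"-> {per_100k:.1f} per $100K")
```

Under this toy accounting, the smallest lab "goes the furthest per gallon" even though it would rank last by raw dollars-- exactly the comparison a dollars-only assessment never makes.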

Monday, December 2, 2013

Grades (What are they good for?)



Teachers (college or K-12) always complain about grading, and perhaps even more about student whining about grades (see this example). Biology professors, for example, often complain about students who intend to go into medicine being "obsessed" with grades. Given the challenges of the grade-awarding process, I've been reflecting lately on why we grade, and I welcome thoughts from all of you. Personally, I find that this question segues into a more fundamental question about the purpose of formal education.

Most teachers would quickly suggest that we give grades to assess student understanding of the material covered. Some students appreciate virtually all the nuances of the material (and thus get an "A"), others have a very basic understanding (perhaps getting a "C"), and still others fail to understand the material (grade "F"). The grade thus provides feedback to the student and to the institution about how well they grasped the material covered. Fair enough.

So, let me follow with another question-- why do teachers teach material to students? Presumably, it's because the material is worthwhile, and it is thus desirable for the student to learn it. If the purpose is for students to understand and appreciate the content, then an "F" indicates a failure not just of the student but of the teacher's purpose as well. If we desire students to learn something and they fail to do so, then both the student and teacher roles have failed (irrespective of whose "fault" that failure was). In this regard, our system is counterproductive to its purpose in that, if one or more students fail to learn the material covered, the response is to stick an "F" label on the student and simply move on. Given that there may be numerous reasons the student failed to grasp the material (including bad timing or perhaps a teaching style that did not work well), why would we not let students take more opportunities to learn a given body of material, assuming learning the material is indeed valuable?

When we talk about "tests," we think of tests in schools, with grades. Here's a different example-- a driver's license test. This test is worthwhile-- it provides training that may even save the life of the awardee and gives certification of their ability. There are no grades-- a candidate either passes and gets the certification or does not. If they fail two times and later master the material and pass, there is no consequence from the original failed attempts, since they are irrelevant-- all that matters is that the candidate has now mastered the valuable material.

Our "grade-obsessed" system has an entirely different purpose-- the stratification of students. This stratification may reflect effort or ability, though we can never be certain of the relative weighting of the two in the outcome. Some of the stratification may be arbitrary, too, as some students may have been ranked low directly as a result of having one particular teacher (whose teaching style did not work for them) and not another.

Coming back to the example of premedical students, it's again unquestionable that medical schools use grades as one of their most prominent criteria for admission (along with others, such as MCAT score, rigor of coursework, letters, etc.). By awarding grades, undergraduate professors facilitate this stratification of applicants. I think it's safe to argue that, all else being held constant, every non-A reduces an applicant's probability of admission to top-tier medical schools, even if only slightly. The same holds for undergraduate admission-- all else being equal, every non-A in high school reduces the range of schools to which the applicant may get accepted (and the associated financial aid). How can we blame students for seeming grade-obsessed when faced with this reality?

Basically, I think the current system focuses too heavily on innate ability and luck, and gives too little credit to people who are willing to strive hard but were incompletely successful on their first attempt-- and that willingness, I think, is a big predictor of eventual success. I see no reason why, as with driver's license tests, we don't let people re-learn and re-test; those people may in the end understand the topic just as well or better, and will have demonstrated perseverance. In fact, with the current system, there's frequently virtually no reward for going back and trying to understand better what you didn't understand the first time-- totally contradictory to our stated goals.

I find these facts to be very disturbing. I did not enter the educational enterprise for the purpose of stratifying students-- I would prefer that students actually learn what I teach. Some colleges allow grades to be optional for some or many classes, but even some of the more famous examples people cite (e.g., Reed College) still record grades in the end.

Can the situation be fixed? I think any solution would involve a radical change in how education works. My first thought was that we'd follow the driver's license example and report specific competencies. For example, students in a transmission genetics course could get certified for competency in their understanding of meiosis, recombination, genetic mapping, heritability, Hardy-Weinberg genotype frequencies, etc. However, that approach merely moves the problem-- what if someone grasps these concepts only at the most basic level, and then moves on as though certified with full understanding/ competency?

Honestly, I think the solution (which itself has numerous problems-- see below) is to separate the process of teaching from that of assessment/ stratification. This solution may be more feasible now than in years past, given the growth of resources available electronically. We can still have assessments in classes, but they'd be more for students to self-assess, not for permanent records. A student would finish any genetics class they like (live, online, self-taught from books, whatever), and when they feel they are adequately prepared, take a "for-the-record" assessment. These assessments might only be offered once a semester or once a year, so students can't just keep taking them weekly. However, students could retake the assessment after the waiting period, up to some maximum number of times (maybe 3-5).

What are the strengths of this approach? Teachers would focus on teaching and not on grades. They would no longer be involved in the stratification process-- their only goal would be to help students learn the material. With such a change, students would more readily accept that "we're on the same side" with respect to learning. Again, teachers should still provide extensive in-class assessments for students to practice with, but the grades on those tests would be informational only. For students, there are two large benefits. First, they can learn however they feel works best for them. Those who prefer live, standard classes can take those. Those who prefer online classes can take those. Second, it provides students with a "marketplace" of opportunities. Some teachers may be known to focus on particular subsets of the material (specialties or areas of research). Students can learn those areas from those teachers, and go to other teachers to learn other specialties within the scope of the assessment.

The approach has major weaknesses, though. Students would spend a lot more time researching class options and outcomes rather than just taking "the genetics class offered this semester at my school." They may also be sick or upset on the day of the test and have to wait a year to repair a weak grade from a single test (though this may already be true for heavily weighted final exams). Teachers/ professors would give up control of the tests. Much as we complain about grading and grade complaints, I suspect we'd complain more about a standardized test not focusing on what we think is most relevant. We'd probably also get pressure from students (and administrators) to match course coverage to what's likely to be on the test, and professors would immediately scream that their academic freedom to teach whatever and however they like is being impinged upon. (K-12 teachers already encounter this issue with state scholastic requirements.) Finally, there's the question of who actually makes these tests. Honestly, I don't see that this solution is feasible, as the negatives are huge.

Are we stuck with the current system, where teachers' roles often devolve to presentation, assessment, stratification, and moving on*? Or are there alternatives? I welcome feedback from readers.

* Footnote: I realize that many teachers do a lot more than "presentation", including but not limited to one-on-one mentoring of students outside the classroom, and including on material no longer being covered in class.