Teachers (college or K-12) frequently complain about grading,
and perhaps even more about student whining about grades (see this example). Biology professors,
for example, often complain about students who intend to go into medicine being "obsessed" with grades. Given the challenges of the grade-awarding process,
I've been reflecting lately on why we grade,
and I welcome thoughts from all of you. Personally, I find that this
question segues into a more fundamental question about the purpose of formal
education.
Most teachers would quickly suggest that we give grades to
assess student understanding of the material covered. Some students appreciate
virtually all the nuances of the material (and thus get an "A"),
others have a very basic understanding (perhaps getting a "C"), and
still others fail to understand the material (grade "F"). The grade
thus provides feedback to the student and to the institution about how well
the student grasped the material covered. Fair enough.
So, let me follow with another question-- why do teachers
teach material to students? Presumably, it's because the material is
worthwhile, and it is thus desirable for the student to learn it. If the
purpose is for students to understand and appreciate the content, then an
"F" indicates a failure not just of the student but of the teacher's
purpose as well. If we desire students to learn something and they fail to do
so, then both the student and the teacher have failed in their
roles (irrespective of whose "fault" that failure was). In this
regard, our system is counterproductive to its purpose in that, if one or more
students fail to learn material covered, the response is to stick an "F"
label on the student and simply move on. Given there may be numerous reasons
the student failed to grasp the material (including bad timing or perhaps a teaching
style that did not work well), why would we not let students take more
opportunities to learn a given body of material, assuming learning the material
is indeed valuable?
When we talk about "tests", we think of tests in
schools with grades. Here's a different example-- a driver's license test. This
test is worthwhile-- it provides training that may even save the life of the
awardee and certifies their ability. There are no grades-- a
student simply passes or fails to earn the certification. If they fail twice
and later master the material and pass, there is no consequence from the original
failed attempts-- all that matters is that the student
has now mastered the valuable material.
Our "grade-obsessed" system has an entirely
different purpose-- the stratification of students. This stratification may
reflect effort or ability, though we can never be certain of the relative
weighting of the two in the outcome. Some of the stratification may be
arbitrary, too, as some students may have been ranked low directly as a result
of having one particular teacher (whose teaching style did not work for them)
and not another.
Coming back to the example of premedical students, it's
unquestionable that medical schools use grades as one of their most
prominent criteria for admission (along with others, such as MCAT score, rigor
of coursework, letters, etc.). By awarding grades, undergraduate professors
facilitate this stratification of applicants. I think it's safe to argue that,
all else held constant, every non-A reduces an applicant's probability of
admission to top-tier medical schools, even if only slightly. The same
holds for undergraduate admission-- all else being equal, every non-A in high
school narrows the range of schools to which an applicant may be accepted
(and the associated financial aid). How can we blame students for seeming grade-obsessed when faced with this reality?
Basically, I think the current system focuses too heavily on innate ability and luck, and gives too little credit to people who are willing to strive hard but were incompletely successful on their first attempt-- and I think that kind of perseverance is a big predictor of eventual success. I see no reason why, as with driver's license tests, we don't let people re-learn and re-test; those people may in the end understand the topic just as well or better, and they will have demonstrated perseverance. In fact, under the current system there is frequently virtually no reward for going back and trying to understand what you didn't understand the first time-- totally contradictory to our stated goals.
I find these facts to be very disturbing. I did not enter
the educational enterprise for the purpose of stratifying students-- I would
prefer that students actually learn what I teach. Some colleges allow grades to
be optional for some or many classes, but even some of the more famous examples
people cite (e.g., Reed College) still record grades in the end.
Can the situation be fixed? I think any solution would
involve a radical change in how education works. My first thought was that we'd
follow the driver's license example and report specific competencies. For
example, students in a transmission genetics course could get certified for
competency in their understanding of meiosis, recombination, genetic mapping,
heritability, Hardy-Weinberg genotypes, etc. However, that approach merely moves
the problem-- what if someone grasps these concepts only at the most basic
level, and then moves on as though certified with full
understanding/competency?
Honestly, I think the solution (which itself has
numerous problems-- see below) is to separate the process of teaching from that of
assessment/stratification. This solution may be more feasible now than in
years past, given the growth of resources available electronically. We could
still have assessments in classes, but they'd be more for students to
self-assess and not for permanent records. A student would finish any genetics
class they like (live, online, self-taught from books, whatever), and when they
feel adequately prepared, take a "for-the-record"
assessment. These assessments could only be taken once a semester or once
a year, so students couldn't just keep retaking them weekly. However, students could
retake the assessment after the waiting period, up to some maximum number of
times (maybe 3-5).
What are the strengths of this approach? Teachers would
focus on teaching and not on grades. They would no longer be involved in the
stratification process-- their only goal would be to help students learn the
material. With such a change, students would be more likely to accept that
"we're on the same side" with respect to learning. Again, teachers should still
provide extensive in-class assessments for students to practice with, but the grades
on those tests would be informational only. For students, there are two large
benefits. First, they can learn however they feel works best for them. Those
who prefer live, standard classes can take those. Those who prefer online classes
can take those. Second, it provides students with a "marketplace" of
opportunities. Some teachers may be known to focus on particular subsets of the
material (specialties or areas of research). Students could learn those areas from
those teachers, and go to other teachers to learn other specialties within the
scope of the assessment.
The approach has major weaknesses, though. Students would spend
a lot more time researching class options and outcomes rather than just taking
"the genetics class offered this semester at my school." They may
also be sick or upset on the day of the test and have to wait a year to repair
a weak grade from a single test (though this may already be true for heavily
weighted final exams). Teachers and professors would also give up control of tests.
Much as we complain about grading and grade complaints, I suspect we'd
complain more about a standardized test not focusing on what we think is most
relevant. We'd probably also get pressure from students (and administrators)
to match course coverage to what's likely to be on the test, and professors
would immediately scream that their academic freedom to teach whatever and however
they like was being infringed upon. (K-12 teachers already encounter this issue
with state scholastic requirements.) Finally, there's the question of who
actually makes these tests. Honestly, I don't see that this solution is feasible,
as the negatives are huge.
Are we stuck with the current system, where teachers' roles often
devolve to presentation, assessment, stratification, and moving on*? Or are
there alternatives? I welcome feedback from readers.
* Footnote: I realize that many teachers do a lot more than
"presentation", including but not limited to one-on-one mentoring of students outside the
classroom, even on material no longer covered in class.
I'd like to take the opportunity to describe the situation here in the Netherlands, as you might not be familiar with it.
In high school, courses are generally grouped into subjects: we have one mathematics subject, not separate courses for algebra, calculus, trigonometry, etc. 50% of the final grade for a subject is determined by in-class assessments given throughout your high school years, such as essays, tests, and presentations. The other 50% is determined by a national test, taken at the end of senior year. The obvious advantage of this system is the comparability of the final grade, since every student in the country takes the same test. The disadvantage is that a large portion of senior year is spent preparing specifically for this test (and the kind of questions in it). Another disadvantage, of course: if you don't feel too good on the day of your exam, you're screwed, especially considering that all tests are taken within a period of three weeks.
So, your final grade is calculated as follows: (school grade + national test grade) / 2 = final grade. If a school's average grade for a subject differs by more than 0.5 (out of 10) from its average national test grade, the school is penalized, as its testing is either too lenient or too strict.
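To make the arithmetic concrete with made-up numbers (my own, purely for illustration): a school grade of 7.0 and a national test grade of 8.0 average to (7.0 + 8.0) / 2 = 7.5 as the final grade; and if a school's in-class grades averaged, say, 7.8 against a 7.0 average on the national test, the 0.8 gap would exceed the 0.5 threshold and trigger the penalty.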
Generally, we think the advantages of this system outweigh the disadvantages, because it ensures that we know exactly what a high school graduate can and cannot do, and because it keeps schools in check.
As for the importance of grades, we don't have huge differences in the quality of universities or colleges, and selection often just doesn't take place. You've got your high school diploma, so you're expected to be fit for the job. If you take the right subjects in high school, you can get into your major of choice just by signing up, about 90% of the time. (Incidentally, high grades are very important if you want to get into medicine. Also, you will need to do some additional work if you want to get into theater, arts, etc.)
Well, I hope that was of interest to you. I don't have any answers to your question, though I sincerely welcome diverse assessment instead of mere multiple-choice testing, and a closer look at what kinds of skills students actually use in real life and are therefore useful to teach (a small hint: for me, French is not one of them).
Very interesting, thanks for sharing!
You raise some important questions about college education, not only "What do we want students to know/be able to do?" but also "How do we evaluate student learning?"
I like the idea of a self-paced curriculum where students can engage with particular modules until they show proficiency. However, I feel that this sort of structure may not be ideal for all students. When I taught at Duke, most of the students were driven to succeed. Now that I am teaching at North Carolina Central University, I find that my students lack both extrinsic (grades, jobs) and intrinsic (goals, success) motivation. For the students without clear goals, an open-style curriculum could lead to poor performance from a lack of structure or set timelines. So while self-pacing may be good for some institutions, it may not be good practice across the board.
My experiences with standardized testing have not been positive, either in my own education or in guest teaching in public school classrooms. You run into the issue of teaching to the test. In one class, the teacher mentioned the standardized test (still months away) at least 4 or 5 times in a one-hour class. The test becomes an end unto itself, instead of a means of learning the concepts or skills required for success in a field.
Instead, I propose that institutions use a standardized test to measure the success of their own teaching. Use formative assessments so that students and instructors are aware of the students' progress; summative assessments aligned with the formative assessments and course goals would then be used for grading. The final exam could then be a non-graded or participation-credit standardized test to check that students are performing to a set of national standards. Poor performance on the standardized test would reflect more on the institution than on the individual student.
Lastly, I think the best model for examinations is a one-on-one interview. Exams are static, but in an interview the instructor can ask follow-up questions to help the student, clarify a confusing statement, or test depth of knowledge. This is more akin to graduate school in the sciences where your preliminary exam and dissertation are presentations and extended conversations with faculty. Interviewing students is much more time-intensive and may not be feasible in a 100+ person course, but I feel you would have a more accurate picture of each student's abilities.
Thanks for the reply! Agreed completely on one-on-one interviews, and also that they're not easy in 100+ person courses.
Mohamed,
I am totally thrilled to see a researcher of your status in the field examining these questions. I have been grappling with them for a few years now. I think the grading culture has led me to question staying in biology more than anything else.
As a TA, I deal all the time with students who are unsatisfied with an exam's ability to assess their actual understanding. The most common question I get after grading an exam is "But I really do understand it, why can't you give me full credit?"
I think the thing I like most about your proposal is that it gives students the chance to really delve into the topic and decide whether they want to pursue it as a career, or as a component of a career. Much of the time, I conclude that I'm dealing with students who don't even know why they are in the class, and whose biggest complaint is that there's too much information and they can't take it all in. They are rarely comforted when I tell them that they might get it in ten years, or after doing an intensive research project.
I plan to use regular (e.g. weekly) ungraded assessments when I'm designing a course. The idea here is to give people feedback, not to have them judge themselves. I also would like to implement a way for students to gauge their own understanding and compare that to a score based on an assessment. Students can benefit from knowing when they don't actually understand things as well as they think they do. They can also benefit from seeing when they are improving. Sometimes the students who work the hardest and understand the material the best have the most anxiety about whether they will pass.
Great thoughts-- thanks for sharing! Helping students gauge their own understanding is fantastic! I should do more of it than I do...