RATE MY PROFESSOR
I just got my first bad review on the website Rate My Professor. But before I contest that rating, I have a few other observations about WWW.RateMyProfessor.com.
When I was an undergraduate student…back in the stone age…we didn’t
have the luxury of sitting with our computers or smart phones and seeing what
our peers had to say about a professor in whose class we were about to enroll. Instead, a group of us would sit around a table
in our student union and throw out instructors’ names. Then there would be give-and-take about who to avoid like the plague, whose class you wanted to take, and all of the other
“dirt” you would need to know. In
retrospect, there was a lot more quality in the “reviews” than what you see now
posted on the computer. To me, it’s
analogous to having a face-to-face discussion with a friend versus sending that
same person an e-mail. The e-mail gives
you a lot of “surface structure” but not much “deep structure.” At least, that’s how the sociolinguists would
describe it.
While I don’t have any data to support this, it seems to me the bulk of
RMP reviews are done by undergraduates.
If this is true, you stand a much better chance of being rated if you teach undergraduates than if you teach graduate courses.
Again, this is only speculation on my part, but I think the undergraduate-to-graduate ratio of reviews is probably close to 8:1, and it could be even higher, maybe closer to 20:1. The reason that I even bring this up is
because I don’t know many graduate students who think of their professors as
“hot,” one of the scoring options on RMP.
Undergrads are driven by hormones.
Most graduate students have their hormonal needs already met. Or, maybe it’s just sour grapes. I’ve never had a student rate me as “hot”
(although my wife frequently rates me as “hot”…especially when I make a great
dinner for her!).
So, how accurate are the RMP reviews?
Again, I don’t have any hard data to support this, but I believe they’re pretty accurate. By accurate, I
mean accurate at the extreme ends of the scale.
If you continually get stellar reviews, you’re probably a damn good
teacher. If, on the other hand, you get
comments like “the worst teacher I’ve ever had,” you had better take a good, hard look at yourself in the mirror. Something
is wrong and your students deserve better.
For the bulk of professors, we probably fall somewhere between these two
extremes and that’s where any type of teacher-rating system is tricky to
interpret. An example of how one
organization is trying to get around this rating dilemma is the teacher
evaluation system being pushed by the Rhode Island Department of
Education. If you do a lot of work in schools and have seen a lot of teachers, a brief walkthrough is all it takes to tell the good teachers from the bad. RIDE, however, in its attempt to
quantify these data, is implementing a system where every teacher in the
building gets a series of thorough…if not mind-numbing…evaluations. The system is so cumbersome that it is unsustainable in its present form and destined to collapse under its own weight. Why
do it? Simply to identify the poorest
teachers in a building. (As if the
principal didn’t already know who these individuals are.) If RMP were interested in making its data more objective, it could follow RIDE’s lead.
Which brings me to my original comment about my negative review. It was easy for me to determine which student supplied the rating. I had only one person drop from my roster last year.
It was an interesting case. In a graduate class of 30, she was easily the weakest. On virtually all of the measures I use to evaluate students, she ranked near the bottom. She never participated in class discussions. What most worried me, however, were her abysmal writing skills.
My philosophy regarding writing is that we are all amateurs practicing our craft and there’s room for improvement on everyone’s part…if we are willing to work on it. In this student’s case, the skills were well below even the lowest benchmarks. In a profession where being able to write clearly is paramount, I felt in good conscience that she and I needed to sit down and work out a plan that gave her writing a chance.
I made this offer…agreeing to work with her at her convenience. My offer was rebuffed. She also needed to be excused from several
class sessions due to a medical procedure.
I have always had, and continue to have, a policy of trusting students
in these instances. I never ask for a
doctor’s excuse. Medical issues are
sometimes sensitive and it’s frankly none of my business as to why a student
needs to be treated. So, to make a long
story short, I tried to bend over backwards to help this individual. In the end, however, she still wasn’t cutting
it. As I recall, I may have counseled her to drop the course and take care of her medical issues, with the understanding that I would work with her on her writing and she could re-enroll the following year. As I later learned…after reading her
review…this isn’t what she expected.
Instead, she lambasted me and made me look like a heartless
egotist. That night, after reading her
review, I had trouble sleeping, thinking about what she had said. I couldn’t let her skewed view of the world
and my professionalism go unanswered. I
logged on to RMP and requested a chance to rebut her rating. Surprisingly, several days later, I saw that
her review had been taken down. I’m not
sure what prompted this action, but I felt vindicated. My point in belaboring all of this is that there are times when reviews are more a matter of perception than truth. Therein lies the weakness of the RMP system, and it’s a good thing to keep in mind. I suppose we all need to remember that it’s
not a perfect world out there…especially in the world of higher education and
specifically what students expect from professors and professors from students.
February 4, 2012