Every economics undergraduate learns that competitive markets are efficient.
But efficiency requires information. In a competitive market, a store that sells rotten apples for $2.00/lb will likely go out of business. Yet if people do not have information about the quality of apples they are buying, a store can sell bad produce and make a profit.
Universities are in the teaching business. Every semester students diligently fill out forms evaluating their professors. Why don't more universities make the information gathered through teaching evaluations public?
Perhaps student evaluations provide such poor quality information that it is worse than no information at all. Can you learn anything useful from knowing that the average teaching evaluation at Rideau University is 4.2/5 while South Eastern University has an average score of 4.1/5? In an entertaining survey, Mary Gray and Barbara Bergmann argue that student evaluations are inaccurate, demeaning and misused. For example, because students expect female professors to be helpful, "women have to (and do) spend more effort and time than men on nurturing behavior to get equivalent ratings."
Perhaps student evaluations create perverse incentives. Curtis Eaton and Mukesh Eswaran have argued that some disciplines have more rigorous grading standards than others. If students give higher evaluations to easy markers, professors have an incentive to inflate grades. Yet this is an argument against using any student-generated teaching evaluations, not against making evaluations public. (Though if professors desire small class sizes, making evaluations public could reward bad teaching/tough grading.)
Economics tells us to think about interests. Some universities - MIT, McGill, and now Dalhousie - are making some course evaluations available to students. I have no idea how widespread this practice is, but I'm struck by the fact that the few universities that I've found that release teaching evaluations are relatively research-intensive. Why would research-intensive universities release teaching evaluations more readily than universities with a teaching mandate? Here's one hypothesis: at MIT professors expect to be evaluated on their research. Mediocre teaching is unfortunate, but can be outweighed by research brilliance. But how can a teaching-oriented institution justify keeping on truly poor teachers for years?
But the economy abhors a vacuum. The absence of official teaching evaluations creates a market for unofficial ones, most notably ratemyprofessors.com (complete with revenue-generating ads for student-centered programming such as MTV's "16 and pregnant"). Ratemyprofessors provides a valuable service. Like most profs, I get asked for advice about course selections. Because some people are harder markers or better teachers than others, I direct students to ratemyprofessors.
Some people argue that ratemyprofessors is unreliable because only students who love or hate a course will contribute to the ratings. This is a professor-centric view. Ratemyprofessors is written by students for students. A typical review is concrete and useful: "I love this prof! Hes is so entertaining. The only complaint I would have would be the emmense amount of reading that is required. Also, the T.A's mark all of your work, not the prof, so your grade really depends on your T.A." But there are borderline racist and offensive reviews, for example, "CAN NOT teach and it is almost impoosible to understand what he says."
The only way to prevent students from relying on ratemyprofessors is to provide alternative and better data. Perhaps, as a colleague in engineering, Donald Russell, suggested to me, if teaching evaluations were published, students would take them seriously, because they would know their ratings would guide future students' choices.
That better data would have to respect students' and professors' legitimate privacy concerns. McGill's on-line evaluations can only be read by members of the university community; Dalhousie is planning to allow profs to choose to keep their evaluations private. But since keeping an evaluation private indicates that you might have something to hide, will people choose this route?
Any student-generated measure faces an insurmountable problem. Students by definition do not have specialized knowledge of the material they are being taught. When my public finance students learn that elephants are a provincial responsibility according to the Fiscal Pachyderm Relations Act of 1886, how can they be expected to know I am telling them lies? How do they know that the fanatical prof who insists upon the correct use of their, there and they're is teaching a vital job-related skill?
In the UK, the Quality Assurance Agency conducts audits of institutions' teaching practices and performance. I have no experience of the QAA process. I know from having done external assessments myself that it is easier to evaluate bureaucratic procedures (does the course outline provide all necessary information?) than intangibles (is the lecture interesting?). I also know that external assessments are a phenomenal amount of work for everyone involved. But one potential strength of the audit approach is that it can ensure high standards are maintained, and course content is not diluted. In general I believe that sharing information - for example, comparing final exams within and between universities - is a good thing.
The fundamental problem in the educational market place is information: how can you tell if your teacher is a rotten apple? If I had a proven solution to this problem I could sell it for millions - but I don't. I don't even know if public teaching evaluations would help more than they would harm.
My university (U of A) released teacher evaluations. However, they counteracted this by not letting you know who was teaching a section until it was under way. Within the first week, you could swap sections for better teachers, but only if it fit your schedule and there was space. This seems even worse, since you might already have personal experience with a horrible prof and have no power to avoid them again.
As you point out, ratemyprofessors provides the information that most students are looking for. Easy/hard marker, interesting/dull classes, lots/little homework. I'm not saying that students will always make the decision that's best for their education - being more or less rational, the desire for a good teacher is partially offset by the desire for an easy teacher. Of course, one friend of mine, a math prof, claims to have a reputation as an "easy" teacher. At least one of his courses sets a common exam for all sections, on which his students consistently outperform the other sections. So he's quite convinced that he's "easy" because he actually teaches the material well.
Further on the usefulness of ratemyprofessors, I think you've mis-classified your supposedly "racist" comment. Teaching is about communication. I've had profs with thick accents who nonetheless had the vocabulary and ability to enunciate that's required to teach well. But "impossible to understand what he says" is a perfectly valid and legitimate complaint that has little to do with race. I've had Canadian-born profs who mumble or use jargon only a grad student would understand, but even if it's an immigrant prof...do you really think that someone with borderline spoken English skills is a good teacher to an English-speaking class?
Posted by: Neil | May 13, 2010 at 10:20 AM
When I was a student at a top-tier, research-oriented university, teaching evaluations were collected and the results disseminated by the student government. I ran this system for three years--it's survived for 20. The student government also doled out teaching awards.
Some notable remarks:
1) The university itself had no interest in teaching evaluations. Teaching was a secondary priority and bad teaching was irrelevant to promotion decisions. Good teaching as exemplified by getting an award did seem to have some small effect. Enough that not-yet-tenured faculty (and lecturers) tended to dominate the awards.
2) Reviews were disseminated to the students. Professors did not like this and applied pressure on the provost to get the evaluation system closed down. The student government was organized as an independent 501(c)(3), so the university could only argue on comity grounds.
3) Originally the reviews were available to anyone in the world. One professor attempted to get a job at another university and was confronted by his (bad) reviews. He did not get the job. The student government was pressed harder and agreed to restrict access to the review data to students.
4) Employers in the US decline to comment on an employee's past job performance to another (potential) employer. We are given very specific legal advice that we are to say nothing but to verify the period of employment, job title, and pay.
Posted by: Jon | May 13, 2010 at 11:21 AM
At U of Calgary we could access evaluations but they gave very little useful information.
Though having taken full programs in Arts and Engineering, I feel I have to agree about the borderline racist/offensive reviews, especially in Engineering. Often it did not matter how good the prof was: if they had any accent, the students didn't care to listen.
Having moved to another university, I have found that without the evaluations available I have put more effort into talking to other students about different professors and courses than I did previously. Perhaps, if universities are not very concerned with teaching, then student governments should fill this void by providing some forum for teacher evaluation information to be transmitted.
Posted by: Lingwhatics | May 13, 2010 at 11:37 AM
I thought the efficient market hypothesis died and was buried along with Lehman Brothers?
Posted by: asp | May 13, 2010 at 11:42 AM
The prof isn't there to be liked. The students will inevitably prefer easy graders over tough profs who (gasp) occasionally fail someone.
I get pissed off at students who blame their lack of success on the profs. By the time you get to university, you should be able to deal with a prof who's a lousy teacher. If nothing else, the library is full of books. Go read some of them. The material an undergrad covers will be covered by dozens of texts.
My memory of being a student is that the pressure to get high grades is enormous. Many students exist in a state of stark terror of getting a B. The belief is that if you don't get A's, you'll be doomed to the welfare rolls. Actually learning something is entirely secondary to getting high grades. Being naturally contrary and generally a stubborn jackass, I refused to buy into the hysteria. I did all my work myself (no divvying up problem sets for me), worked like a fool, had no social life, and got one A in my entire student career, and lots of B's and C's. When I graduated, I figured my best bet was to bone up on "would you like fries with that?".
Funnily enough, I've never once been asked for my transcripts by an employer or prospective employer, and only once have I been asked to produce my degree (which sits rolled up in a forgotten corner of a closet). But I have, on a daily basis, been required to know stuff I learned while getting lots of mediocre grades and entirely too little sleep. And now I make a living knowing stuff that all those A students didn't bother to learn, so they pay me to remind them.
Posted by: Patrick | May 13, 2010 at 12:52 PM
Just for the record, MIT does not provide its students with teaching evaluations.
MIT students RUN the teaching evaluation and self-publish the results.
Posted by: Simon van Norden | May 13, 2010 at 02:58 PM
I've never understood the concept of a "professor's right to privacy" when it comes to teaching evaluations. Teaching is a professor's job, for which they are paid, and it is by definition a public activity. Some universities make videotapes of the lecture available. Where does the privacy come in?
I'd also argue that student evaluations, like democratic elections, can produce lousy results, but they're much better than any other system we've found thus far. As both a student and a professor, I've found that amusement and marking can influence an evaluation, but only up to a point. My main reservation is that I don't think students are sufficiently knowledgeable to do a good evaluation of the course when they fill out surveys. They aren't really in a position to know how much of what they've learned will be of use, how much should have been covered but wasn't, how a better or worse teacher might have done it, etc.
Disclaimer: I'm a tenured prof who has had great and awful evaluations (often from the students taking the same class.)
Posted by: Simon van Norden | May 13, 2010 at 03:11 PM
Simon, thanks for the correction. I think students at MIT's cross-town rival, Harvard, started the whole business in 1925, and evaluations are still public there, too. What's interesting to me about these comments is how many focus on the legal concerns, e.g. privacy, as opposed to the more economic issues, e.g. choice. According to a news report I found (and I don't know how much to believe it), French courts banned the naming of individual teachers by a home-grown ratemyprofessors-type site, note2be.
Posted by: Frances Woolley | May 13, 2010 at 03:13 PM
Simon, good point on the right to privacy. Could you imagine how you'd respond if you asked a contractor for client feedback on work he'd previously done (say, referrals for previous construction projects) and he told you that would violate his right to privacy? "Thanks, I'll go with someone else".
You're right that marking only goes so far in evaluations. My old man is a university professor and he's much loved by his students (at least according to his university's private evaluations and his ratemyprofessors.com evaluations) despite the fact that, according to them, he's a tough marker (and, in fairness, I'm sure he is). Students will still give a professor who's a "hard" marker (which I think means he or she doesn't hand out A's like candy canes) a decent evaluation (despite complaining about the "hard" marking) if the professor is otherwise any good. (I'm sure, though, that professors who aren't any good like to believe that they consistently get poor evaluations because they're hard markers. It's a chicken-and-egg scenario: does the professor get a lousy evaluation because students get poor grades, or does the professor get a lousy evaluation and the students get poor grades because the professor is a lousy teacher?)
And Frances, I agree with Neil. Complaining about a professor's communication skills isn't racist (or anywhere close to being borderline racist), that's a bona fide complaint. Like it or not, when your job is, in part, to teach at an English (or French) language university, verbal proficiency in English (or French) is a necessary skill for performing that job.
Moreover, that's a constructive comment that a professor can build on. People can improve their verbal communication. For example, I recall in 2nd year mathematical economics, I had a professor whose English was atrocious. He was a smart young guy, but no one could understand what he was saying. We used to joke that he should teach the class in Mandarin so that at least the Chinese students could figure out what was going on. And I suspect that that was a point that everyone made in their course evaluations. Funny thing, though, when I had him the next year for advanced micro, his language skills had improved. He spoke slower and louder and made a conscious effort to speak more clearly. I don't recall commenting on his language skills at the end of that course.
Posted by: Bob Smith | May 13, 2010 at 04:54 PM
"Perhaps student evaluations provide such poor quality information that it is worse than no information at all."
This is quite possible. Then shouldn't the question be, "Should schools bother with teaching evaluations?" rather than "should the results be public?"
I tend to think they're useful, but I often do quite well on them, so I'm biased. One thing I always find funny is that I do exceptionally well on the "did you enjoy this course" type questions but not nearly as well on the "did you like your professor" variety. Love the course, tolerate (at best) me. Or as Tim Flannery once said about his manager Dick Williams:
'I love Dick as a manager, but if I ever saw him when I'm through playing baseball, I'd run over him with my car.'
Needless to say I try to avoid my students when they're behind the wheel. :)
Posted by: Mike Moffatt | May 13, 2010 at 05:01 PM
Quick responses: Bob, I'm doing some research right now with ratemyprofessors (to be presented at the CEA meetings). (Which means I have a vested interest in agreeing with those who say ratemyprofessors is a legitimate research tool.) My poking around suggests the same thing other studies have found: men get 'tough but fair' type evaluations; women do far less often. On language competency: that's a big can of worms no one wants to open. I spend a fair bit of time reading the evaluations of people I know - some of the time the complaints about accents are legitimate, sometimes they're an excuse for something else.
Mike - people respond to incentives. If there were no evaluation of teaching at all, I think I'd slack off more than I do already. And my teaching evaluations tell me useful things, e.g. for me, powerpoint works. And when the evaluations plummet, as they sometimes do, it reminds me life is getting out of hand.
Posted by: Frances Woolley | May 13, 2010 at 05:22 PM
I'm a woman, teaching physics at a university. I generally get good to very good teaching evaluations. Most students just fill in the bubble sheets, but some also add comments. Most of the comments I find constructive and useful, some are just the usual "I hate physics" stuff. But every so often, I get entirely sexual comments. I find these very upsetting, especially the ones suggesting that sexual violence would be a good idea. I've had to ask the office staff to remove those evaluations before I ever see them because I find I can't face my mostly male classes after. Should the students keep their right to privacy in completing these evaluations? At the moment the university is protecting their "right" to sexually harass me.
Posted by: REC | May 13, 2010 at 05:34 PM
I absolutely support teacher evaluation, but I agree that students are rather poor arbiters. Instead, why not focus on student performance? One of the best measures of whether a professor taught eco100 correctly is how those same students perform in eco200. By using multilevel modeling techniques (which are quite common in the education literature) we can control for variables at both the student level (e.g. SAT scores, high school average, which professor they had previously, etc.) and variables at the classroom level (such as who the professor is). You may not be able to appropriately evaluate the performance of all students, but you can at least follow students in their major subjects (where students are required to take a particular set of courses in chronological order).
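A minimal sketch of what I have in mind, in Python with statsmodels (the data file and column names are hypothetical):

```python
# Minimal sketch of a multilevel (random intercept) model, assuming a
# hypothetical dataset with one row per student who took eco100 then eco200.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("eco_students.csv")
# columns (hypothetical): eco200_grade, sat, hs_avg, eco100_section

# Student-level covariates enter as fixed effects; each eco100 section
# (i.e., each professor's class) gets its own random intercept.
model = smf.mixedlm("eco200_grade ~ sat + hs_avg", data=df,
                    groups=df["eco100_section"])
result = model.fit()
print(result.summary())

# The estimated section intercepts rank professors after adjusting for
# the observable quality of the students they happened to teach.
print(result.random_effects)
```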
Posted by: hosertohoosier | May 13, 2010 at 07:44 PM
REC - that is deeply disturbing.
Posted by: Patrick | May 13, 2010 at 08:30 PM
REC: Wow! I'm really surprised your school sends out uncensored versions - particularly if students are writing things like that!
That being said, my wife is an academic as well and I've heard before from her and her female colleagues that they have issues reading comments left by students. I'm not sure if my wife reads hers - I should ask. I remember one (female) friend saying she didn't because "first third of them say they enjoyed the class, second third of them tell me things I already know, and the final third call me the c-word".
In general students treat female lecturers differently than male ones. My wife and I have both taught in the same business program (not Ivey) to the same set of students, and she gets 'tested' by students at least 5x more than I do and gets about 5x the number of e-mails from students. Plus they talk/write in a tone to her that they'd NEVER do to me. It may be partly due to our relative stature (she's 5'3" and fairly petite - I'm 6'3" 210lbs and have been known to come to class on occasion with a black eye or fat lip from playing various sports), but I'm pretty sure it's based solely on gender. A weird thing is both the male *and* female students treat us both differently.
Posted by: Mike Moffatt | May 14, 2010 at 06:08 AM
hosertohoosier,
That's an interesting suggestion to measure professor performance by performance of their students in subsequent years, but I don't think you've got it quite right.
The problem is, I suspect that 2nd year performance may be negatively correlated with first year professor quality. Let's suppose that the likelihood that a student in Econ 100 will take Econ 200 is a function of two hard-to-observe attributes: (i) their inherent aptitude for economics and (ii) how much they enjoyed Econ 100. Now, I'd suggest that this isn't an unrealistic model of how students actually choose their second year courses: they pick courses in areas that they do well in, and they pick courses in areas that they like (even if they don't necessarily do well in them).
So let's think about how this model would work. Consider a scenario with two Econ 100 sections taught, respectively, by Good Professor and Bad Professor (let's call them the "Good Section" and the "Bad Section"), and where there are two types of students who may enrol in Econ 200: students with an inherent aptitude for economics (the "Gifted Students") and students who will enrol in Econ 200 if they find Econ 100 interesting (the "Interested Students"). And let's also assume, reasonably, I suspect, that the Gifted Students, on average, will do better in Econ 200 than the Interested Students. In this simplified model, the only students from the Bad Section who will take Econ 200 are the Gifted Students, because the Interested Students in that section don't find Econ 100 interesting. In contrast, the students in the Good Section do enjoy Econ 100, and so not only will the Gifted Students in that section enrol in Econ 200, so will the Interested Students.
But you can see how first year professor quality would be negatively correlated with performance in Econ 200. The average performance of students from the Good Section will be lower than the average performance of students from the Bad Section, because the students from the Good Section include both high performing Gifted Students and lower performing Interested Students, while the students from the Bad Section include only the high performing Gifted Students. Unless you can identify the "Gifted Students" and the "Interested Students" (and I suspect you couldn't), second year performance in Econ 200 would be a poor measure of first year professor quality.
You are on to something here, though. If this model is remotely accurate, the way to measure professor quality (at least in first year) is to look at the "retention rate", i.e., the proportion of students who took Econ 100 with one professor who then enrol in Econ 200. In this model at least, the retention rate is more likely to capture professor quality than student performance in Econ 200.
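Here's a quick simulation of this story, just to show the selection effect at work (all the numbers are invented):

```python
# Toy simulation of the Gifted/Interested selection story (invented numbers).
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # students per Econ 100 section

def simulate_section(p_enjoy):
    """One Econ 100 section; p_enjoy is the chance an Interested student
    enjoyed the course (high for Good Professor, low for Bad Professor)."""
    gifted = rng.random(n) < 0.2       # 20% Gifted, 80% Interested
    enjoyed = rng.random(n) < p_enjoy
    takes_200 = gifted | enjoyed       # Gifted always continue; Interested
                                       # continue only if they enjoyed it
    # Econ 200 grades: Gifted students average 80, Interested average 65.
    grades = np.where(gifted, rng.normal(80, 5, n), rng.normal(65, 5, n))
    return takes_200.mean(), grades[takes_200].mean()

for name, p_enjoy in [("Good Section", 0.9), ("Bad Section", 0.1)]:
    retention, mean_grade = simulate_section(p_enjoy)
    print(f"{name}: retention {retention:.0%}, mean Econ 200 grade {mean_grade:.1f}")

# The Good Section retains far more students but shows a LOWER average
# Econ 200 grade, because its continuers include many Interested students.
# Retention tracks professor quality; the mean grade gets it backwards.
```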
Posted by: Bob Smith | May 14, 2010 at 09:22 AM
There are some studies that look at the effect of instructor characteristics on retention. Here's one
REC - you may be depressed to learn that this study found that having a female physics prof in first year *decreased* the probability of women majoring in the subject. Perhaps they see what you're up against!
Posted by: Frances Woolley | May 14, 2010 at 10:59 AM
Mike: do you and your wife treat the students differently?
Posted by: Jon | May 14, 2010 at 11:49 AM
Interesting study, though I think their conclusion that "female faculty members do have the potential to increase student interest in a subject as measured by course selection and major choice" overstates their actual results.
Although in some fields (math and stats, geology, journalism and sociology) you can say unambiguously that there is evidence of a positive "role model" effect, in others there is evidence of an unambiguously negative role model effect (political science, economics, physics and biology), while in 5 other fields there is unambiguously no effect one way or the other.
Moreover, in some of the fields in which they claim there is evidence of a positive role model effect (psychology and education), the evidence is decidedly mixed. For example, it's hard to explain their result in education, which suggests that having a female instructor in first year increases the likelihood that female students will take additional courses, but decreases (massively) the total hours of courses taken in that field. That result isn't obviously consistent with the "role model" thesis.
Posted by: Bob Smith | May 14, 2010 at 11:54 AM
Frances,
Thanks for touching on such an important topic. Your point about ratemyprofessors.com filling a gap is well-taken. Students have always maintained informal networks where "real" information about professors is exchanged. I mean, everyone recommended that I talk to some upperclassmen to get the real deal on profs when I entered undergrad. That's just prudent.
I sympathize with professors' reluctance to embrace formal, public evaluations, because no one likes to be shamed in public. But really, who do they think they're keeping this information from by keeping it informal? I mean, a few well-placed questions will reveal who's an effective teacher, who's not, who's an easy grader, who's a sexist prick, etc. It's just that it takes more work to harvest and organize the information in its current, informal form.
So isn't the real question, which I think hosertohoosier and Bob Smith touch on, HOW we structure the tools that harvest, organize, and present this information? Intelligently designed surveys, organized according to fair models, interpreted in a compassionate way, and presented in a clear format would be valuable for the vast majority of students and teachers. The only ones I see losing out are the poor teachers who have no interest in improving. They're the big winners in the current system, and I can see no compelling reason for their interests to determine anything at any institution that justifies its existence by purporting to teach anything.
A stopgap suggestion: I draft my own paper evaluations to supplement any official ones that the university uses. I also use SurveyMonkey (Google it) to create my own online evals. For the online evals, I give the entire class a small bump to their grades (perhaps 5 points out of 1,000 total for the course) if I get a certain response rate. (The idea being to get them to encourage one another to participate.) I justify my informal evals as well as the point reward using the following argument: providing intelligent feedback requires students to reflect on what they have learned in the course as well as how (and how well) they learned it; these evals both provoke and capture that reflection.
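(The bump rule itself is trivial to administer; a sketch, with my own made-up threshold and bonus:)

```python
def participation_bump(responses, enrolled, threshold=0.8, bump=5):
    """Class-wide bonus (e.g. 5 points out of 1,000 course points) awarded
    if the evaluation response rate reaches the threshold; zero otherwise."""
    return bump if enrolled and responses / enrolled >= threshold else 0
```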
REC: Those comments sound like a singularly straightforward adolescent strategy for getting under someone's skin. Terribly bad behavior, but here don't you think we have to weigh against it the value of getting honest, critical feedback from students who can give it free from fear of reprisal from a vindictive prof? I mean, if you could take action against a student because you found his/her feedback "offensive," I think that would be more or less the end of honest feedback at your institution, don't you? And that would demolish the justification for getting any kind of feedback at all. Just knowing what you've written, it sure seems like these students are trying to take a petty and impotent vengeance. Since it's you who have ostensible authority over the students, the students can't "harass" you in the legal sense of that word. (Of course, they are in fact harassing you, but only in this very limited, anonymous way. Face-to-face or signed would be different, of course.)
Posted by: J. Powers | May 14, 2010 at 12:25 PM
J. Powers, others, no support at all for a UK-type model of independent assessment e.g. having a knowledgeable outside observer come into your room, watch your class, evaluate your exams and course outlines etc? I wonder if the popularity of the current system comes from the fact that it requires relatively little effort on the part of faculty and administrators. Hand out the forms, feed them into a machine, punch out the numbers. That makes it much cheaper than, say, the UK Quality Assurance Agency or an ISO type method of assuring standards are met.
Posted by: Frances Woolley | May 14, 2010 at 12:59 PM
Undergraduate students at the University of Toronto in Arts & Science have had access to information on professors for nearly ten years through the Anti-Calendar published by the Arts & Science Student Union. However, the utility of this document is undermined by poor information elsewhere; many courses in calendars list the instructor as "TBA."
Is there a middle ground for evaluation data? For example, put the evaluations in a secure room (or, better, on a secure, password-protected website) and require students, etc., to authenticate to access the data.
Posted by: Bruce | May 18, 2010 at 05:39 PM
Teaching evaluations ARE public. It's called Ratemyprofessors.com.
Posted by: Fancy Pants | May 20, 2010 at 01:47 AM