


I think your survey is potentially biased, as it neglects to account for TAs who are given the freedom to construct their own rubrics. For the three classes I have TA'd (two in Philosophy, and currently one in Econ), I was told loosely what criteria to look for, but given the freedom to construct my own marking rubric.

Constructing rubrics has been very fruitful for me, as it allows me to rank students on a relative scale, and helps me remain focused after grading the same material over and over again!

Not to mention, I find that using rubrics potentially provides more feedback to students (i.e. where they went wrong) and allows me to write more constructive comments (for the few students who actually read them to improve in the future...).

Looking forward to the results!

For content-based courses, a rubric may be the same as an answer key, but for skill/performance-based courses such as a writing course, how can you have an answer key? An answer key suggests to me that there is AN answer or at least a limited set of answers. A rubric suggests to me that you are less concerned with the particular answer/response/product arrived at and more interested in its qualities.

@Scott P Bacon: When you say rubrics allow you to rank students on a relative scale, do you mean relative to one another? The whole point of a rubric is to make the assessment criterion-based rather than norm-based.

Scott, Thanks for your response. If you hadn't encountered rubrics at some point earlier in your education, do you think it would have occurred to you to make one up? Do you think that constructing rubrics is rightly the TA's job?

English-jack - This is a crucial point. The problem with a traditional answer key is that with a true/false/uncertain question a student might say "I answered true, so my answer's right, but I only got 1 out of 10, which is unfair!" A rubric, unlike an answer key, does not presume the existence of one right answer.

@English-jack.blogspot.com While I am hesitant to admit this, I meant ranking students relative to each other.

The unfortunate reality, at least in my experience, is that most teachers/instructors/professors have an idea of where their class performance should be in terms of grade distribution. They base these expectations on, I think, prior experience and past performance of previous classes. Perhaps they are also bullied by institutional administrations...

So when a professor tells me s/he expects a normal or bimodal grade distribution, with an average of 75-80% for example, I use rubrics as a tool to assist in ranking student marks (percentiles?) to fulfill the expectations my professor has of me as their TA.

@@English-jack.blogspot.com "A rubric suggests to me that you are less concerned with the particular answer/response/product arrived at and more interested in its qualities."

It is true that I am much more interested in 'how' students arrive at their answer than 'what' the answer is itself. When I read a paper that is well executed with a strong understanding of grammar and the like (or, in the absence of strong grammar, strong diagrammatic or mathematical skill), it illustrates to me a greater understanding of the material than if the student merely provided the answer explicitly asked for by the question. When I mark a mathematical/statistical assignment, I am more interested in the interpretation of the conclusion and the formulaic processes that are part of the whole, not just the bottom line.

Am I alone in thinking that if a student grapples with the material but is unsuccessful in deriving the exact answer, they should not be penalized, so long as they applied their best effort in a logical and constructive way?

@@@English-jack.blogspot.com "The whole point of a rubric is to make the assessment criterion-based rather than norm-based."

I think you do yourself a disservice by not using rubrics to compare and contrast students. This has given me countless opportunities to comment on what students 'should' have done to receive a higher mark. Forgive me if I misinterpreted your use of 'norm-based', as I am familiar with it in a few contexts...

@ Frances - first, sorry again for posting such a long reply... But this place is quite addicting...

I absolutely believe I would have developed my own grading scheme or 'rubric' with or without prior exposure. Prior experience, however, has allowed me to integrate aspects of rubrics which I found effective as a student, and leave out ones which could have been better deployed or removed altogether. I also strongly believe that it is the TA's job to create a rubric, as they will be the one ultimately marking! However, the professor should be consulted at every step of the rubric's development, and the rubric reviewed upon completion; this allows, I think, the TA and professor to see eye to eye on what the expectations for the class are.

Scott - I think you and English-jack may be talking somewhat at cross-purposes. With respect to your reactions in the later part of your comment:

"I also strongly believe that it is the TA's job to create a rubric, as they will be the one ultimately marking!"

Arguably creating the rubric - deciding what is a good answer and what is a not so good answer - is one of the most important parts of teaching. Are you really saying that a teaching assistant - who is often still an undergraduate student herself - might be better qualified to do it than a professor with a PhD? If this is true, why do we need professors with PhDs in the first place? (Or should I not ask because I really don't want to know the answer to this question).

I wrote a reactionary (slightly drunken) ranting comment last night, but decided against posting it. But Frances just now told me she wants to see it. So here it is:

WTF is a rubric? It sounds like something out of OISE, from the people who banned phonics, so I'm agin it.

It sounds like a poncy name for a marking key. Marking keys are for anal-retentive Psychology and Biz Skool profs (sorry Mike).

One of my worst experiences as associate dean was trying to patch up the mess from a marking key that went wrong. Some of the TAs sensibly realised that they had to ignore it, and used their own good judgement, but others followed it diligently to the letter.

Marking keys reward dull students who rote learn what the prof said and parrot it back (or at least remember the key words and write them down in random order for one tick each). They penalise bright students with original ideas who write something the prof never expected.

You never know how the students will interpret a question, and your lectures, and the textbook, till you start reading their answers. Any "marking key" gets thrown out the window (or should be) after reading the first answer.

Here's my "rubric": I forget everything about the student except the exam question in front of me. Looking only at the answer to that question, I ask myself: "Was this answer written by an A student? A B student? A C student?"

Sometimes I also ask myself: "If I didn't already understand the answer to this question, would I be able to figure it out from reading this student's answer?"

We must resist the OISEfication of universities! Schools use rubrics? OK, that's a good argument for not using them.

Rant over.

Sorry Frances, but you did say it would be funny to read it!

Rubric is red lettering to me, often an initial letter, often illuminated, in a manuscript by a scribe.

As the Complete OED (Oxford English Dictionary, http://www.oed.com ) says:

rubric, n. and adj.
[ Many more examples of first and later use; I give a few first-use quotes, see the entry for much more ]

A. n. I. Something traditionally written in red, and related uses. Cf. red letter n.
a. A direction in a liturgical book as to how a church service should be conducted, traditionally written or printed in red ink. Also fig.

a1400 (1300) Lay Folks Mass Bk. (Royal) (1879) l. 624 Þo robryk [a1450 Corpus Oxf. rubryke; c1450 Newnh. ribrusch] is gode vm while to loke, þo praiers to con with-outen boke.
1563 J. Foxe Actes & Monuments 891/1 The whole Canon of the Masse with the Rubrick therof, as it standeth in the Massebooke.
[ more ]

b. An established custom; a set of rules, an injunction; a general prescription.
1744 H. Walpole Let. 22 July (1955) XVIII. 479 Letter-writing is one of the first duties that the very best people let perish out of their rubric.
1798 Monthly Rev. Aug. 409 The harmony of revolutions, the counterpoise of forces, and the compensations of cold and darkness, afford matter of curious speculation: but it would be absurd to form on them the rubric of our moral duties.
1854 C. Lever Dodd Family Abroad l. 455 She expected, doubtless, that I'd follow the old rubric, with opera-boxes, bouquets, ‘marons glacées’, and so on.
1891 N.Y. Times 28 Sept. 4/5 It is the duty of independents—the duty of all voters—‥to‥‘weigh the merits and demerits of each candidate and each party’‥. No better rubric of conduct could be laid down.
[ more ]
†c. The rule of a religious order. Obs. rare—1.

d. An explanatory or prescriptive note introducing an examination paper.
1959 Oxf. Mag. 4 June 438/2 A good deal of the content of this paper will‥be retained by widening the rubric of the paper on Political Thought.
[ more ]
2. a. A heading of a chapter or other section in a book or manuscript, written or printed in red, or otherwise distinguished in lettering; a particular passage or sentence marked in this way. Also in extended use.
b. The heading of a statute or section of a legal code.
c. fig. A descriptive heading; a designation, a category.

†3. A calendar of saints; an entry in red letters of a name in such a calendar. Also in extended use. Obs. (rare after 17th cent.).

4. In Spanish- and Portuguese-speaking contexts: a decorative flourish attached to a signature; (also) a mark used in place of a signature. Now chiefly hist.

II. A substance used for marking in red
†5. Red ochre, ruddle. Obs.
†6. A preparation for reddening the complexion. Obs. rare—1.

B. adj.
1. a. Of lettering: written or printed in red. Also fig.
b. Designating a pillar or post inscribed with the titles of books for sale. Now hist.
2. a. Chiefly poet. Red, ruddy. Now rare.
†b. Designating certain lake pigments. Obs.

@ Frances - "might [a TA] be better qualified to do it than a professor with a PhD"

I do not think so, no. If the professor has a rubric that has worked in the past, then the decision to continue using it or not should be up to them. However, if the professor trusts the TA enough, which is not always the case, then they should be able to work together (see my above comment) to fully understand why the rubric is being used in the first place. I.e. what is the professor looking for? Originality (as Nick seems to favour), regurgitation (as some professors explicitly request), a combination of the two, or yet more criteria by which to evaluate a student's response?

For the most part, I think rubrics are welcomed among TAs; whether they are supplied by the professor or generated by the TA (again with the professor's oversight) makes little difference to me.

@ Nick - if "Marking keys are for anal-retentive Psychology and Biz Skool profs (sorry Mike)", then I argue that the non-use of marking keys is for the drunkards and laggards within academia; those who mark assignments, papers, and projects with little attention to detail, but instead whimsically grade each submission independently of the others.

As fantastical as not using answer keys/rubrics sounds, in my experience it rarely leads to a level playing field for the students. Instead, not using rubrics/answer keys lends preferential treatment to submissions marked under 'good' circumstances (say, after a bottle of red and a steak?) versus those marked under duress.

@@Nick "I forget everything about the student except the exam question in front of me. Looking only at the answer to that question, I ask myself: "Was this answer written by an A student? A B student? A C student?""

Funny you mention this: exactly how do you come to the conclusion that a student is an A, B, or C student by which to further evaluate them? Surely on the appropriateness and applicability of their response to your question. But if their response is appropriate and applicable, could it not be further broken down into pieces that form parts of a whole? If so, then it can be evaluated using a rubric, and should be, to avoid any unnecessary discrimination.

@@@ Nick "Some of the TAs sensibly realised that they had to ignore it, and used their own good judgement, but others followed it diligently to the letter."

I agree that some flexibility is required when using rubrics; they should serve as guides to keep the marker on track, not THE set list of criteria by which the submission is objectively graded. Keeping a rubric flexible, while still maintaining consistency, allows original ideas to be treated as what they are, and rewarded for being so.

Scott - oh dear that made me laugh. I hope Nick reads this.

Bill - that is lovely! Thank you.

Funny that the OED doesn't even allow for the meaning of rubric we are using here. I wonder what the origin of the term for this meaning was... probably from the definition of things written in red lettering, indicating how something is supposed to be done.

@Andrew F | April 18, 2012 at 05:15 PM

No, two of its senses fit Prof Woolley's concept of a rubric (marking guide):

"1.b. An established custom; a set of rules, an injunction; a general prescription.
1.d. An explanatory or prescriptive note introducing an examination paper",

and your "I wonder what the origin of the term for this meaning was" is answered in their smattering of print citations (hard to capture the sounds of the professorial common room) in the full entry.
Go and look in the OED.

Having been a scribe, I thought of the red-letter days and making a red Initial first. From context I could see Professor Woolley's meaning, but I hadn't heard it used that way.

I'm currently completing a 2-year after-degree B.Ed. program, and I see more poorly constructed rubrics than good ones. An example:

Some evidence of further research...
Evidence of further research...
Strong evidence of further research...

Each ellipsis contains the same text: "including reference made to peer-reviewed sources, course readings, and other components of the program."

Well what the deuce is 'some' versus 'strong'? You have to actually define what you want, not just use qualifiers to justify yourself when a student comes complaining. How about 'a minimum of 20 peer reviewed sources' vs. 'a minimum of 10 peer reviewed sources'? Or regarding spelling and grammar errors, set a limit of 1 or 2 for an A, 3-5 for a B, etc., instead of 'few errors' or 'several errors'.

I like professors (and teachers) who define their expectations clearly, and rubrics can help. Or they can confuse and hinder. My rule of thumb is that if you have to go point by point, in class, to further define your rubric, then you probably went wrong somewhere.

Teachers also like to get the students they know well to make their own rubrics -- the students know your expectations, and by having input, it becomes clearer for them as well. I don't see this working too well in university, but I've never seen it done, so I'm not sure.
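The kind of concrete cut-offs suggested above can be made completely mechanical. Here is a minimal sketch in Python; the level names, source counts, and error thresholds are hypothetical, loosely following the "20 vs. 10 sources" and "1-2 errors for an A, 3-5 for a B" examples, and are not from any real course rubric.

```python
# Hypothetical criterion-based thresholds, following the examples above.
# All cut-offs are illustrative, not taken from a real marking scheme.

def research_level(peer_reviewed_sources: int) -> str:
    """Map a count of peer-reviewed sources to a defined evidence level."""
    if peer_reviewed_sources >= 20:
        return "strong evidence of further research"
    elif peer_reviewed_sources >= 10:
        return "evidence of further research"
    elif peer_reviewed_sources >= 1:
        return "some evidence of further research"
    return "no evidence of further research"

def writing_grade(error_count: int) -> str:
    """Map a count of spelling/grammar errors to a letter-grade band."""
    if error_count <= 2:
        return "A"
    elif error_count <= 5:
        return "B"
    return "C or below"

print(research_level(12))  # evidence of further research
print(writing_grade(4))    # B
```

The point of the sketch is only that 'some' versus 'strong' becomes a number a student can check for themselves, rather than a qualifier invoked after the fact.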

I think of a marking rubric as an answer key with marks broken down for parts of a question and/or rules for giving part marks for wrong answers.
There's no way I could mark 50 or 60 assignments consistently without making a rubric from the answer key.
Answer keys or rubrics shouldn't be taken too literally for written questions, or demand a really specific answer to a general question. I remember being frustrated in undergrad when I got marks taken off midterms for not having the "right" written answer (i.e. the memorized phrasing from the textbook), when I understood the concept and explained it correctly, but the TA didn't have a reasonable English vocabulary.
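That "answer key with part marks" idea could be sketched as follows. The question parts, weights, and criteria here are made up purely for illustration; a real rubric would come from the professor's answer key.

```python
# Illustrative sketch: a rubric as an answer key with per-part marks.
# Part names, weights, and criteria are hypothetical examples.

rubric = {
    "q1a_setup":          {"marks": 3, "criterion": "writes down the correct model"},
    "q1b_derivation":     {"marks": 4, "criterion": "derives the first-order conditions"},
    "q1c_interpretation": {"marks": 3, "criterion": "interprets the result economically"},
}

def grade(awarded: dict) -> float:
    """Sum the marks awarded for each part, capped at each part's maximum."""
    total = 0.0
    for part, spec in rubric.items():
        total += min(awarded.get(part, 0.0), spec["marks"])
    return total

# A student who sets up correctly, half-completes the derivation,
# and interprets well, out of 10:
print(grade({"q1a_setup": 3, "q1b_derivation": 2, "q1c_interpretation": 3}))  # 8.0
```

Breaking the total into capped parts is what lets two TAs, or one TA on assignment number 60, give the same partial answer the same part marks.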

A: mechanistic marking is a big issue. It's one reason for profs to get TAs to do the exam/assignment themselves and use that as the basis for a marking key. TAs much prefer just to be handed a marking key with X% for this and Y% for that, but if a TA isn't able to answer the questions correctly, how can he/she be expected to mark them?

James - your point that rubrics are part of making expectations clear is a good one, and there is a lot to be said for profs letting students know what they expect.

I find rigid schemes such as 1 to 2 spelling and grammar errors for an A usually get torn up after one or two papers. What would go wrong with that specific rule is that a long paper will have more spelling and grammar errors, all else being equal, than a short paper, and one might not want to penalize length per se. Also, some spelling and grammar errors are worse than others. For example, word processing software will automatically convert behaviour to behavior. Is that a spelling error? And how many marks would you take off for the incorrect use of dependant (dependant = noun, dependent = adjective)?

Bill Lee: "1.b. An established custom; a set of rules, an injunction; a general prescription."

This, I think, is where I have sympathy for Nick's position in contrast to Scott's. If someone uses a set of mechanistic rules to create a copy of the Mona Lisa, we don't figure this is great art, even if the imitation may be practically indistinguishable from the original.

Here are some key functions of education:
- developing people's originality and creativity
- sorting out more able students from less able students.

If people are told "do it this way for full marks" then it is difficult to measure people's ability to work things out for themselves, and there is little room for students to demonstrate originality and creativity.

I'll tell you, it is always scary to see, at the end of the year, how students who are able to reproduce an idea in an exam situation are completely unable to apply it in real life - whether that's students worrying about autocorrelation and structural breaks in cross-sectional data (no. it. can't. happen.) or throwing all the stuff I've gone on and on about with regard to efficient and equitable tax systems out of the window the second they encounter a real-world tax problem.

"If people are told "do it this way for full marks" then it is difficult to measure people's ability to work things out for themselves"

I think it's easy to conflate/confuse "be able to do this for full marks" with "do it this way for full marks". Any student is going to prefer the latter to the former. When I was in university, I noticed that there was always a subset of students who would pressure the prof and TAs to switch from "be able to do this" to "do it this way". They'd show up at office hours, line up at the front of the lecture hall, and generally pester them into giving up the magic formula for success.

The good profs blew them off (from my point of view as a go-it-alone, muddle-through-on-my-own type of student, this was a thing of beauty). The bad ones succumbed.

Patrick - o.k., think about what you've just said, and now think about the implications of the Drummond report.

The Drummond report was very critical of the quality of higher education - the idea being that a lot of profs are lazy and slack and don't do a good job. So we're going to have learning outcomes and other ways of measuring professorial performance.

The cheapest and easiest way of assessing a professor's performance is to delegate the job to students, i.e. good profs are profs that students like.

And if students like profs who say "do it this way"...

Patrick, sorry for the Ontario-centric comments; I forgot that you live in Wild Rose country. But, yes, you raise very good points.

Two questions that (most) profs really don't like to hear:

"Will this be on the exam?"

"What am I responsible for knowing?"

The relation between prof and student ought not be a principal-agent problem between an employer who wants a job done and an unwilling worker who is only doing the job to get paid. In a principal-agent problem we have a finely detailed strict rubric called a "contract".

It ought to be a senior partner/junior partner in a common enterprise.

I think one of the benefits of a rubric is forcing the professor to identify what it is they are actually looking for. When I started using them I included more methodological things (clarity of analysis, e.g.), but eventually I realized that what I actually want from my students is a substantively correct answer - that they identify the legal issues, the applicable law, and the relevant facts, and analyze them. Clarity is nice, but if the substance isn't there (and substance isn't necessarily outcome, but it can be) then there's nothing I will think is worth giving credit to.

The advantages for me have been a) ensuring better consistency across exams; b) communicating more clearly to students what I want (although they can't have the rubric in advance, I am clearer in telling them what I want); c) explaining to students why they got the grade they did. It has made marking slightly less painful (although it is still essentially excruciating).

I never mind being asked what is on the exam either, or what they don't need to know. It's a way of clarifying what I'm trying to teach and what I want them to know.

I think that makes my rubric more of an answer key, but since answers are actually what I want, I'm sticking with it.

But most students are akin to unwilling workers! They want the papers with the right things written on them so they can get a job. They don't give a flying fig about knowledge or critical thinking for its own sake. They want a ticket to German engineering in the driveway of a McMansion, tropical vacations, and granite countertops.

Not that we should trust Wikipedia but a glance at
http://en.wikipedia.org/wiki/Rubric_(academic) or the Arabic, Spanish and Korean versions might be useful. (What do you mean you don't understand 20 languages, aren't you Canajen?)

Wiki articles are multi-authored, fought over (see the History tab at the top for the various changes over the years), and can change and be abused in front of your eyes.
Still, it increases thought and brings in a few other references (one in "Technical" mentions Ontario, but this is not carried on elsewhere).
And I think the term Concept Inventory (cross-referenced) is more useful. [ http://en.wikipedia.org/wiki/Concept_inventory ]

When is the approach where "everyone gets an 'A'" and loses marks over the semester more useful?
"Did you include x, and y, and z in your answer, Student 713?"
Outlines, PowerPoints and such are often given out during a [non-reading] lecture. Would those not be the students' course guides? I have found collapsing outlines (expanding a basic 1, 2, 3, 4 outline dynamically as the lecture goes on to 1-a, 1-a-i, 1-a-ii, 1-b, 2, 2.1, 2.2...) more useful, as it expands the [ lecture ] in context. If really dynamic, it can be expanded on the fly based on student comments and queries.

I see that there are dozens of rubric-maker software packages and templates, such as RubiStar and Rubric Maker. Most are aimed at helping elementary and secondary teachers.

And do Carleton and other colleges require one year of teaching in a high school once hired, to find out what is coming up in the school systems and to "get real" about the general junior student population?
Some of the best profs I had had taught in Hamilton and other high schools, and were disabused of the idea that incoming university students were already young adults, in academic preparation and in juggling conflicting course loads and requirements.
So pre-tests, required initial paper proposals, meetings on writing gooder, etc., made for a happier second half.
Carleton's Student Academic Success Centre and the Carleton University Learning Commons are too general and don't get down to the narrow requirements of Economics courses (...nor feed the 60 percent of students who didn't eat breakfast and have low-blood-sugared brains for much of the day. Coffee don't help.) They were a sop to the high "withdrawal rates" of Cartoon U.

Rubrics (Roo briks) help the marker, whoever that may be, but are often internally verified by the composer and not verified with someone outside the course, the department, the school.
Bad rubrics don't improve marking or teaching.

@Nick: "It ought to be a senior partner/junior partner in a common enterprise."

This is an advantage to standardized exams - it does make teachers and students partners, allied together in common cause against the external examiner. It's also the case that when you're teaching something that students perceive to be a useful skill, e.g. how to drive, how to do stuff in Excel, etc., it's amazing how hard people work. 'Do I have to know this for the final exam?' basically means 'other than passing the final exam in this course, I see little benefit to me from understanding this material.'

I think it's partly that: the more that grades matter for grad school, and the more that students pay for their education, the more the prof/student relationship becomes a principal/agent problem, with the student wanting to get the max possible grade for the min possible work, and the prof wanting to give the min possible grade for the max possible work. And I can completely understand where students are coming from on this one.

@Alice, thanks for those comments. I'm interested that you don't give out the rubric in advance - I guess for the writing-to-the-rubric reasons that others have mentioned?

One big plus about being clear on expectations is that it means students are less likely to fail or get Cs/Ds for completely avoidable errors e.g. not including a data summary, not including a literature review, etc. Not that I ever personally manage to achieve such clarity, but whatever.

@Patrick - or just any kind of job that allows them to buy a home and raise a family. And I'm 100% sympathetic.

Bill: "does Carleton and other colleges require one year of teaching in a high school once hired"

No, but having kids go through the education system is a real eye-opener!

I agree on bad rubrics and good ones.

Looking for more on the use of 'rubrics', I skimmed ERIC (the Education Resources Information Center, the world's largest digital library of education literature) and found little.

Using their Thesaurus (a remnant of the old days of limited search tools and punch cards) vocabulary, I find:
Rubrics (Scoring Guides): Use Scoring Rubrics

From the ERIC Thesaurus
Scoring Rubrics
Scope Note : Evaluation tools, usually grids, that list the criteria for a task or performance, and articulate gradations of quality for each criterion
Broader Terms
Evaluation Methods

Related Terms
Alternative Assessment+
Evaluation Criteria+
Performance Based Assessment
Self Evaluation (Individuals)
Student Evaluation+
Teacher Evaluation+
Writing Evaluation

Used For
Assessment Rubrics
Rubrics (Scoring Guides)

And that's it.
There are a few articles, and I tried combining the term with Economics Education.

Googling (Binging, Yahooing, DuckDuckGoing) I came across a nice piece from Loyola Marymount University (Los Angeles) that used a specific Economics course rubric as an example.

And the lovely style of 'breadcrumb' links across the top of the page shows you where they start to discuss rubrics to make better teachers, in The Office of the Vice President for Academic Planning & Effectiveness.

They use the example from Barbara E. Walvoord (Professor Emerita, University of Notre Dame), "Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education." San Francisco: Jossey-Bass Publishers, 2004.
See also her "Rubrics: What Are They Good For?"
http://www.neasc.org/downloads/walvoord_plenary_handout.pdf (22 pages, 123 KB)

Often in the literature there is a reliance on (Bloom's) Taxonomy of Cognitive Knowledge. (They never completed the other sets.)
See for example, William Peirce's "Designing Rubrics for Assessing Higher Order Thinking"

Even the AEEE welcomes rubrics.
..."Also contributions presenting economics education and business education in specific countries as good practise, case studies or country reports are highly welcome for the rubrics without the anonymous review process, if they are interesting and understandable for an international audience."

Do economics teachers assume that students will soak up presentations like a sponge?
Is there no test of teaching effectiveness other than failure (of the student; though then we could assume that the teacher failed but won't admit it)?

