
Comments


While the workload of teaching a small versus a big course can vary, I'm not sure that institutions make that much distinction themselves. I know that my institution doesn't; a professor is expected to teach some number of courses per year, and the size isn't really relevant (but the large first- and second-year classes are almost invariably taught by lecturers while the professors teach the smaller third- and fourth-year ones).

Ontario colleges use a Standard Workload Formula, which takes into account class sizes, marking type (in process, automated, essay), and whether you've taught the course before/recently.

We've just been through the research assessment process in New Zealand. This is only an assessment of research; the government hands out money for teaching based on student numbers (for domestic students -- international students pay fees), and institutions decide internally how to allocate funding and workload for their employees.

The research portfolio (based on six years' work) has three categories. The major one, worth about 70%, is 'Research Outputs' (four nominated research outputs to be assessed for quality, plus up to 30 more to demonstrate 'a platform of research'). These have to be outputs: articles, conference papers, reports, patents, software, etc., etc. The two minor categories, worth about 15% each, are (research-based) 'Peer Esteem' and 'Contributions to the Research Environment'. That's where you put refereeing, being on conference committees, supervising research students, getting grants, prizes, invited presentations, and a partridge in a pear tree (provided it's a research-based partridge).

The whole thing is then assessed by a panel assigned to a collection of related disciplines adding up to 400-800 portfolios per panel. Each portfolio is rated independently by two panel members to give two initial scores. They then agree on a second-round score. Finally, the whole panel discusses the portfolios and assigns the definitive scores at a very long meeting.

I think the assessment of research is pretty good (disclaimer: I was on one of the panels, so I'm biased and there are lots of things I can't say). But the process is very expensive -- both in preparing the portfolios and in assessing them. Also, there's a good case that the funding formula gives relatively too much to the top researchers, and the process doesn't even attempt to get a fair allocation of resources to individuals, just to larger units.

Thomas, thanks for sharing those insights. It sounds like a very methodical and comprehensive effort to measure research productivity. One question: how is joint authorship on publications handled in terms of weighting? For example, is the publication weighted equally across all the authors?

Livio: someone ought to do a post on that question one day. (Not me!) The only right answer is 1/number of authors. Any other answer creates perverse incentives as well as adverse selection.
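For concreteness, here is a minimal sketch (Python, with invented paper titles and author lists) of what a strict 1/n rule computes: each paper contributes 1/(number of authors) to each author's count, so a paper with thousands of authors adds almost nothing to any individual's tally.

```python
# Minimal sketch of a strict 1/n (fractional) authorship rule.
# The paper titles and author lists below are invented for illustration.
from collections import defaultdict

papers = {
    "solo paper": ["Alice"],
    "three-author paper": ["Alice", "Bob", "Carol"],
    "big-collaboration paper": ["Alice"] + [f"coauthor_{i}" for i in range(3000)],
}

credit = defaultdict(float)
for title, authors in papers.items():
    share = 1.0 / len(authors)      # each author gets 1/n of one paper
    for author in authors:
        credit[author] += share

print(f"Alice's fractional paper count: {credit['Alice']:.4f}")
# 1 + 1/3 + 1/3001 ≈ 1.3337: the 3,001-author paper is worth almost nothing to Alice.
```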

Livio: "how is joint authorship on publications handled in terms of weighting?"

The assessment is supposed to be primarily of research quality, not quantity. For the four nominated research outputs the researcher writes explanatory comments. These tend to include something about who was responsible for what parts of the research. At least for collaborations within New Zealand, the fact that all the portfolios go to the same panel ensures a certain level of internal consistency in the claims -- you're unlikely to get two people both saying they did nearly all the work for a particular paper.

There is no prescribed method for deciding how to handle joint authorship in the other 30 research outputs; it's left up to the panel, who at least understand the norms for publication rates, co-authorship, and author order in their disciplines. I don't think there is a universal answer to this question: Nick's 1/number of authors also creates perverse incentives, just different ones.

This approach doesn't scale well: you'd need to narrow the scope of each panel to make it feasible for Canada, let alone the USA, and so you'd lose the cross-discipline comparisons that help precisely where bibliometric approaches do very badly.

@Nick, so a physicist who spent three years of his life away from home at the LHC gets 1/3000th of a paper per publication? My last paper had ~40 authors and I am on a relatively small project.

The number-of-authors issue is intractable for between-field comparisons without some sort of norming approach. Entire fields (like genetic epidemiology or high-energy physics) would look completely unproductive under a 1/n authorship rule. It is true that the current system of counting papers disadvantages fields with high barriers to publication (e.g. economics) and is very favorable to fields with low barriers (e.g. electrical engineering).

My guess at the best compromise would be to rank people relative to their field and level, presuming that the median member of any academic field is trying to be productive. The problem here is entirely people trying to get themselves classed into a lower paper-count field using items from a higher-count field. My guess is that judgement could handle this, but that no fully automated approach is immune to "gaming".
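A minimal sketch of that kind of field-norming, in Python with hypothetical names and paper counts: instead of comparing raw counts across fields, each person gets a percentile rank within their own field's distribution.

```python
# Sketch of ranking researchers relative to their own field rather than
# comparing raw paper counts across fields. All names and counts are hypothetical.

raw_counts = {
    "economics":              {"Ann": 4,  "Bert": 7,  "Cho": 2},    # high barriers, low counts
    "electrical engineering": {"Dev": 25, "Eve": 40, "Fay": 18},    # low barriers, high counts
}

def percentile_rank(counts):
    """Fraction of field members whose count is at or below each person's."""
    values = sorted(counts.values())
    return {name: sum(v <= c for v in values) / len(values)
            for name, c in counts.items()}

for field, counts in raw_counts.items():
    print(field, percentile_rank(counts))

# Bert (7 papers) and Eve (40 papers) both land at the top of their own fields,
# so the comparison no longer hinges on raw counts -- but the result depends
# entirely on which field's distribution you are measured against, which is
# where the "gaming" worry comes in.
```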

