I've participated a few times in the adjudication process for the Social Sciences and Humanities Research Council (SSHRC) research grant programs: twice as a reviewer, and three times as chair. (Chairs have to be bilingual, so Quebec profs are often called up to serve as chair.) Regular WCI readers - and of course researchers in humanities and social sciences - will recall that SSHRC reorganised the way it runs research grants: see Frances' post here and Livio's here.
I've long been interested in the mechanics of how SSHRC allocates research funds, and I was curious to see how things had changed during the transition from the old Standard Research Grant (SRG) program to the new Insight Grant (IG) program. So when I was invited to serve again as chair of the economics committee for the 2014-15 exercise, I was happy to accept. That was more than a year ago, but I finally got around to writing something down about it.
SSHRC has published the statistics for the competitions going back to 1995-96, and the most recent data available are for 2015-16, the year I chaired. (The dates are for the first academic year of the grants; the adjudications for 2015-16 took place over October-March of 2014-15.) I don't know anything about what happened in the most recent exercise, and the statistics won't be published until the fall. So any conclusions that I draw here are going to be provisional; maybe things have already changed.
In 2015-16, SSHRC's concern was that IG success rates were too low. (The success rate is the number of projects funded as a percentage of applications.) Here's a graph of success rates for SSHRC as a whole over the past 20 years:
There's data for success rates by committee, but since SSHRC insists on uniform success rates across committees, these charts all look the same broken down by committee. I don't know exactly what moral principle is being invoked by insisting on common success rates, but I'm given to understand that SSHRC got some pushback when they experimented with giving adjudication committees a fixed budget and letting them make the tradeoff between success rates and the size of individual grants. I'll get back to this point later.
Anyway, the success rates for Insight Grants are much lower than they were for Standard Research Grants. But it's not immediately obvious there's a problem here, because IG awards 5-year grants as opposed to the SRG's 3-year grants.
To illustrate the point, suppose that the success rate is 1/3, and that there are 300 applications a year. That means that in each year, 100 projects get funded, and 200 don't. That's the flow story. If you look at stocks, that means that in a given year, there are 300 funded researchers (one-third of whom are up for renewal), and 200 who are unfunded. Happily, there is some mobility between these two groups.
Now let's increase grant durations to 5 years and suppose that the stock situation remains the same. In a given year, 60 funded projects - one-fifth of the stock of 300 - would be up for renewal along with the 200 unfunded projects. To maintain the stock situation - why would it change? - the success rate would have to fall to 60/260 = 23%.
So we really shouldn't be surprised that a grant program with five-year awards would have a success rate lower than one with three-year awards. If you perform this sort of thought experiment with the three-year SRG success rates before 2012, you'd get the sort of five-year IG success rates that we've seen since 2012.
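The thought experiment above can be sketched in a few lines of code. This is just the post's arithmetic wrapped in a function (the function name and parameters are my own labels, not anything SSHRC publishes): hold the stocks of funded and unfunded researchers fixed, vary the grant duration, and see what success rate falls out.

```python
# Steady-state success rate implied by grant duration, holding fixed the
# stock of funded projects and the pool of unfunded applicants.
# Numbers are the illustrative ones from the post, not actual SSHRC data.

def steady_state_success_rate(funded_stock, unfunded, duration_years):
    """Success rate when `funded_stock` grants of `duration_years` roll
    over in equal annual cohorts and compete with `unfunded` applicants."""
    renewals = funded_stock / duration_years   # grants expiring each year
    applications = renewals + unfunded         # annual applicant pool
    return renewals / applications

# 3-year SRGs: 300 funded, 200 unfunded -> 100/300 = 1/3
rate_srg = steady_state_success_rate(300, 200, 3)

# 5-year IGs with the same stocks -> 60/260, about 23%
rate_ig = steady_state_success_rate(300, 200, 5)

print(round(rate_srg, 3))  # 0.333
print(round(rate_ig, 3))   # 0.231
```

Running the same exercise on the pre-2012 SRG rates gives roughly the post-2012 IG rates, which is the point: the drop in success rates is largely mechanical.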
But that doesn't mean SSHRC shouldn't be concerned about success rates, because there's another factor at work here: money.
This is the first of a four-part series
II - Cutting and restoring budgets
Steve: Despite the fact that I have pretty much given up on SSHRC as a consistent source of research funding given the low success rates, I am looking forward to whatever insights your series will offer. Cheers. Livio.
Posted by: Livio Di Matteo | August 09, 2016 at 08:20 PM
What I'm interested in is what a 25% success rate does to the number of people applying. Who are these 240 people who think that they're in the top 25%, but aren't? Given that the identity and publication record of the winners is known, wouldn't you expect people to work out the level of publication success that's necessary to win, and withdraw if they're not close to a winning level? Perhaps that's the subject of your next post?
Posted by: Frances Woolley | August 10, 2016 at 08:14 AM
The last one.
Posted by: Stephen Gordon | August 10, 2016 at 08:29 AM
Curious about overall changes to the size of the applicant pool - I have a strong sense that it has grown over time, which would compress the success rate.
Also, many IGs in the first years of the competition weren't awarded for five years (possibly even the majority?) but remained three-year grants. It seems to me that the bigger change was upping the budget from a cap of $250k to $500k, rather than the change from a three-year maximum to five.
Posted by: Bartbeaty | August 11, 2016 at 12:45 PM