As irritating as it was, SSHRC's infatuation with Research in Buzzword Studies is not why Insight Grant (IG) success rates have stayed so low, even as the budget envelope has increased. The problem is that the hard-won budgetary rigour established during the last years of the old Standard Research Grant (SRG) program disappeared when the IG was introduced.
The increase in grant sizes isn't due to a sudden increase in grant requests during the transition from the SRG to the IG. Average annual requests actually fell:
To the extent that some projects would have fixed costs that are now being amortised over five years instead of three, you might even expect a drop in average annual requests under the IG.
But this isn't the pattern you see in awards. Near the end of the SRG era, average awards were just over 60% of average requests, down from a peak of just under 85% a few years earlier. But look what happened to this ratio when the IG was introduced:
Between 2012 and 2015, successful projects received pretty much all the funding they asked for. (I'll get to 2015-16 in a minute.) Again, I'd like to blame this on the lavish amounts of cash that SSHRC chose to throw at Research in Buzzword Studies, but I can't. You see the surge in the size of grant awards in all the committees - even after taking into account the fact that the IG is a 5-year program, while the SRG was for 3 years:
You see the same pattern across all committees: a reduction in awards during the budget-trimming years of 2007-12, and then a rebound back to and beyond the days of 40% success rates and 30% funding rates. Remember, it's not a matter of researchers systematically asking for more money under the IG - what's happening is that the adjudication committees have stopped trimming award budgets.
I don't know how the total budget envelope is calculated these days, but the days in which funding grows automatically with requests have definitely not returned. So for a fixed budget, smaller awards would result in higher success rates. But for reasons that are best known to SSHRC, its policy is to apply a common success rate across all committees.
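The tradeoff between award size and success rates under a fixed envelope is simple arithmetic. Here's a minimal sketch; the envelope and application numbers are made-up illustrative figures, not SSHRC's actual ones:

```python
# Hypothetical figures for illustration only - not SSHRC's actual numbers.
ENVELOPE = 20_000_000   # fixed annual budget envelope, in dollars
APPLICATIONS = 1_000    # number of applications received

def success_rate(avg_award: float) -> float:
    """Fraction of applications a fixed envelope can fund at a given average award."""
    funded = ENVELOPE / avg_award  # number of grants the envelope covers
    return min(funded / APPLICATIONS, 1.0)

# Trimming the average award from $150k to $100k raises the success rate
# from about 13% to 20%, with no change in the total budget.
print(round(success_rate(150_000), 3))  # 0.133
print(round(success_rate(100_000), 3))  # 0.2
```

The point of the sketch is that success rates and average awards are two sides of the same constraint: with the envelope fixed, the only way to fund more projects is to fund each one less generously.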
You can see the problem. Suppose that the economics committee - which accounts for 5% of the total budget - manages to scrape out $150,000 from the requested budgets of successful applications. That's about the size of an average award, but that money doesn't stay in economics in order to fund another project. Instead, the economics committee gets back only $7.5k of the $150k they squeezed out of the requested economics budgets; the rest gets distributed across the other committees. So we really shouldn't be surprised that adjudication committees gave up trying to control budgets - why should they?
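The incentive problem in that arithmetic can be made explicit. A minimal sketch, using the post's own numbers (the 5% budget share and the $150k in trimmed requests come from the paragraph above):

```python
# Illustrative sketch of the free-rider incentive, using the post's numbers.
ECON_SHARE = 0.05   # economics committee's share of the total budget (~5%)
SAVINGS = 150_000   # dollars trimmed from successful economics applications

# Under a common success rate across committees, any savings are pooled,
# so a committee keeps only its budget share of what it squeezed out.
kept_by_econ = ECON_SHARE * SAVINGS
leaked_to_others = SAVINGS - kept_by_econ

print(kept_by_econ)      # 7500.0
print(leaked_to_others)  # 142500.0
```

A committee that does the unpleasant work of trimming budgets captures only a twentieth of the benefit; the rest accrues to committees that didn't trim at all. That's the free-rider structure in a nutshell.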
In the last year - 2015-16, when I most recently served as chair - SSHRC asked adjudication committees to apply some sort of budgetary rigour in order to increase success rates, and this attempt at moral suasion appears not to have been a complete failure. The ratio of average awards to average requests did fall a bit.
But this doesn't solve the collective action problem that individual adjudication committees find themselves in. It's in the interest of all to apply budgetary rigour, but it's in the individual interest of each committee to simply free ride on the research funds generated by cuts in the other committees.
This brings us back to this experiment in which both the costs and benefits of controlling budgets were internalised within the committees. Committees were given a fixed budget envelope, and were given the discretion to trade off average award size against success rates within a given committee - and so different committees ended up with different success rates, depending on the tradeoffs they were willing to make. I'm given to understand that within SSHRC, this sort of cross-committee variation in success rates is frowned upon: 'unfair' is their preferred adjective.
But as long as SSHRC insists on common success rates across committees, and as long as it refuses to allow individual committees to retain budget savings, I don't see how SSHRC can escape the collective action problem that is at the heart of low success rates for research grants.
This is the fourth of a four-part series:
I - Why are success rates so low?
II - Cutting and restoring budgets
III - Research in Buzzword Studies
Steve, great post - that's a collective action problem I'd never really thought about.
I wonder if you would care to speculate on the other collective action problem - i.e. that if I were to apply to SSHRC I would almost certainly be unsuccessful, but I would raise the amount of funds available for the economics committee. There is, however, no way for me to cash in on the positive externality I'm creating by applying for SSHRC. And do you think this collective action problem is worse in hierarchical disciplines like econ, where everyone pretty much knows how they rank compared to other potential candidates, than in multi-paradigmatic disciplines like political science, where it can be hard to say how, say, a political economy type ranks compared to a comparativist?
Posted by: Frances Woolley | August 12, 2016 at 03:38 PM
Steve:
As I mentioned in my comment on your first post in this series, I have pretty much given up on SSHRC as a consistent supporter of social science research in Canada given the low success rates that seem to have become the norm. I think the research grant system has generally always provided the most support for an elite of researchers at the largest and most research intensive universities and that is probably as it should be. At the same time, I don't believe the top 20% of researchers are the only ones worth funding, or that anyone outside that top fifth is doing no worthwhile research. Many active researchers (as measured by publishing in refereed journals) need only fairly modest support to assist their activity, and the tendency towards larger average grants for the top 20 percent simply makes the lives of active researchers outside the top 20% more difficult. Really, what separates a researcher in the top 20% from one in the top 30%? Why not create even higher standards by handing all the money to the top 5% or 10% of ranked research proposals? It's a farce to argue that there simply is not enough money to fund all worthwhile projects if we are simply piling the money into whatever government bureaucrats determine the top percentiles should be. Only funding the top fifth may also reduce innovation and research in Canada in the long run if researchers outside the top fifth are simply not funded (assuming that some innovation is generated outside Canada's top 20% of university researchers), but of course that is only an opinion and I certainly cannot back that up with evidence. Not sure about you, but it would be difficult to go to bat for SSHRC if a future government decides to simply wind up their funding. After all, if 80 percent of us can conduct research and publish refereed articles with no support, why do we need SSHRC? Cheers. L.
Posted by: Livio Di Matteo | August 12, 2016 at 03:48 PM
Livio - Yes, that's really the issue, that the SSHRC grant program will fall into a death spiral. Low acceptance rates leading to people abandoning SSHRC completely.
Frances - Back in 2007 or so, I circulated a letter to econ department chairs making the point that according to SSHRC budgetary policies in force at the time, more money requested for economics projects automatically translated into a bigger economics budget. We could have hired undergrads to write up 6 pages of absolute crap, added a grotesquely inflated budget and cashed in big time. I suspect that to some extent, this is still the case.
Posted by: Stephen Gordon | August 12, 2016 at 06:15 PM