Summary: Harvey Weingarten, President of the Higher Education Quality Council of Ontario, describes learning outcomes as a "game-changer." A report by Robert Shireman argues that setting out learning outcomes has become, in some cases, nothing more than "worthless bean-counting and cataloging." In this extract from the draft version of my CEA Presidential Address, I talk about how, in theory, learning outcomes are supposed to improve university education, and why professors might rationally resist their imposition. I argue that the keys to using learning outcomes to improve university teaching are:
- persuading faculty members to sit down and have conversations about curriculum, teaching, and student assessment (this may require good management and leadership);
- giving instructors regular and collegial feedback on their teaching performance and methods; and
- mandating the teaching of core skills.
Learning outcomes may facilitate getting faculty members to think carefully about curriculum design, or giving instructors feedback on their teaching, but they are neither necessary nor sufficient conditions for these things to happen.
The argument:
Inadequate teaching in universities is an old problem. The latest attempt to solve this problem involves the use of National Qualifications Frameworks (NQFs) and Student Learning Outcomes (SLOs). Roughly speaking, qualification frameworks state the learning outcomes or competencies students need to demonstrate in order to be awarded an educational credential. These frameworks have many goals. With respect to teaching, the hope is that by specifying, in general terms, what students are expected to learn in university, it will be possible to make the education system more responsive to the demands of students and potential employers, and provide a framework for quality assurance (Allais, 2010).
Without using the qualifications terminology, the province of Ontario has embraced the idea of learning outcomes as part of its quality assurance framework. The experience in this province reveals the challenges facing any attempt to motivate better (or more relevant) teaching. Outlining these challenges, and detailing possible responses, points to specific interventions that can motivate better (and more relevant) teaching. The Ontario experience is of some general interest in that just about every university in Canada uses learning outcomes to some extent, even if only in a limited number of programs (MacFarlane and Brumwell, 2016).
The Ontario learning outcomes process begins at the provincial level. There, broad degree-level expectations are set. These are skills that a student should have, regardless of their course of study. As is typically the case for qualifications frameworks, these include general labour market skills, such as “The ability to communicate accurately and reliably, orally and in writing to a range of audiences” (Ontario Universities Council on Quality Assurance, 2016), as well as the type of specialized subject knowledge that prepares students for graduate school. Each program has broad latitude to define its own specific learning outcomes, as long as all of the degree-level expectations are met. For example, an economics department might expect that students completing an undergraduate degree have “…an ability to use economic methods to discuss, evaluate and propose economic policy”. Once a program’s learning outcomes are approved, course offerings and individual course curricula are then reviewed, with a view to ensuring that a student entering the program would have a reasonable expectation of achieving these learning outcomes.
Ontario’s learning outcomes initiative can, some hope, “modernize teaching and learning across the province” (HEQCO, undated). As is typically the case with qualifications frameworks, it aims to create a framework for assessment of teaching, and also to provide an opportunity “to reflect on, understand and improve current teaching practices” (Goff et al., 2015).
Thinking carefully about what students are expected to learn in a program, and whether or not the course content, student assessment methods, and the overall program structure support that learning, is a good idea. Indeed, such an exercise would seem to be an essential part of curriculum design. The question is not “Do learning outcomes have value?” but rather “How much can a mandated process of defining learning outcomes be expected to change current course content and teaching methods?”
A process of reflection and understanding might instigate fundamental change if the status quo reflected thoughtless and unreflective curricular design. However, university teaching is what it is in part because the current curriculum serves certain interests, and performs certain functions. If professors have not adopted the latest and best pedagogical practices, there is likely a reason why. That reason might be ignorance, in which case a process of reflection might help. But if professors suspect that pedagogical innovation will add to their workload, or that the latest fashion in teaching practice is not well-supported by rigorous empirical evidence, a process of reflection and understanding will not persuade them to change their teaching methods.
There are, likewise, reasons for professors to resist the process of mapping course content to learning outcomes. For example, degree-level expectations such as demonstrating “critical thinking and analytical skills inside and outside the discipline” are sufficiently vague that professors could, quite reasonably, believe that attempting to match program elements and specific course components to these expectations is largely a waste of time. On the other hand, to the extent that degree-level expectations are sufficiently specific that meeting them requires the creation of new course material, complying involves new course preparation and, in some cases, material that the professor may not be comfortable teaching. Moreover, many professors have stronger loyalties to their discipline, and their disciplinary colleagues, than to their employer, or to government funding agencies. A desire to maintain prestige and status with their disciplinary colleagues in other institutions gives professors a reason to stick to disciplinary norms around course content, so that students are well prepared for graduate school or the academic job market, and reflect well upon their former professors. Likewise, professors may regard the implicit qualifications framework defined by an international set of disciplinary norms as more authoritative and more legitimate than government-mandated qualification frameworks, and hence regard compliance with the latter as largely a waste of time.
The limited literature suggests that these concerns about the effectiveness of learning outcomes and qualification frameworks are well-founded. Shireman (2016) is somewhat harsh when he says that the student learning outcomes movement “is steering colleges toward …worthless bean-counting and cataloging exercises that give faculty members every reason to ignore or reject the approach.” A more measured assessment is provided by Allais, who writes, “Expectations that qualifications frameworks can achieve the ambitious policy objectives claimed for them in relatively limited time periods seem to be ill-founded” (Allais, 2010, p. 2). A plausible reason for the lack of dramatic results is provided by Raffe (2013) [gated], who observes, “With respect to many of their objectives [qualification] frameworks provide tools for change rather than the agents of change; the tools will only be used if incentives or requirements are built in to the framework or provided through other policy measures.”
In Ontario, the learning outcomes initiative was, at least by some, envisioned as something that would create incentives for better teaching. Learning outcomes would be measured and performance rewarded. HEQCO president Harvey Weingarten (2014) suggested in a blog post that, “Ontario’s colleges and universities could evaluate entering and exiting literacy skills in all of their students as the critical first step in a comprehensive assessment of the achievement of desired learning outcomes.” Once the assessment of learning outcomes became a reality, it would be possible to change “our institutional funding from enrolment-based to performance-based”.
Unfortunately, realizing Dr Weingarten’s vision turned out not to be entirely straightforward. University students resist writing extra tests, and the higher-order skills taught in university are not amenable to standardized testing. Even if these obstacles could be overcome, the US K-12 experience suggests that high-stakes testing can have unintended consequences, because some schools find it easier to game the test (by, for example, temporarily suspending students who might pull down the school’s performance on test day) than to improve test scores through better teaching (Deming and Figlio, 2016). Moreover, even if university students could be assessed through some kind of standardized test, it would be extraordinarily difficult to link student performance to individual courses or teachers. But without such a link, there is no way of rewarding instructors who do an exceptional job of facilitating student learning.
An alternative way of assessing teaching is suggested by the finding of Dobbie and Fryer (2013) that, in the K-12 context, observing teachers in the classroom, and giving them frequent feedback on their performance, is associated with student success. In a university context, this type of feedback could be expanded to reviewing assignments, final exams and essay rubrics in a collegial fashion, or sitting in on student presentations in colleagues’ classes. Dobbie and Fryer’s results suggest that a variation of a learning outcomes approach that emphasized frequent and on-going feedback could improve teaching. But this would come at the cost of devoting the time of people with subject matter expertise and pedagogical knowledge to assessment exercises. Moreover, it is not obvious how Dobbie and Fryer’s (2013) findings would translate into the predominantly unionized university environment, where there is less that could be at stake in such exercises.
Aside from assessment, learning outcomes can have real effects in another way: by imposing constraints on the curriculum. This suggests that, if we wish to find places where learning outcomes will have real effects, the place to look is where the requirements contained in the degree-level expectations are likely to be binding constraints. In a highly quantitative economics program, for example, the expectation that students will graduate with communication skills could be a binding constraint on curriculum design. This constraint could, in theory, be satisfied by teaching existing courses differently. In econometrics, for example, this might mean placing more emphasis on data visualization and on the communication of empirical results, and less on standard topics. But how can faculty members be persuaded to change?
Given the difficulty of coercing the tenured, getting faculty to change what and how they teach typically involves persuasion. Good leadership could be one way of achieving that faculty buy-in. There is a growing body of K-12 evidence finding that management matters (Bloom, Lemos, Sadun and Van Reenen, 2015), and that good principals lead to better student performance (and bad ones to worse) (Dhuey and Smith, 2014; Coelli and Green, 2012). Indeed, the learning outcomes literature typically points to leadership and institutional culture as key factors facilitating success. Unfortunately, there are few incentives for research-active faculty to take on management positions within the university. Consequently, putting in place leaders who have the trust of faculty is difficult, and leadership can end up being part of the problem, rather than part of the solution.
Another obstacle to change is that, when new learning outcomes are introduced, existing faculty may simply be unable to teach to them. For example, in a discipline like economics, faculty with specialized subject-matter expertise may simply not know how to help students – some of whom come to university with deficient literacy skills (Weingarten, 2014) – learn how to communicate. In this case, requiring, for example, the teaching of communication skills may lead not to a change in what existing faculty teach, but rather to the hiring of instructors specialized in teaching communication skills. Some faculty members might view this as a positive development, and indeed welcome the opportunity to shed their more banal responsibilities and concentrate on their research. This, however, assumes that there will be an on-going willingness on the part of students and governments to continue funding these research efforts.
In sum, learning outcomes are a relatively new approach to motivating good teaching. Yet, to the extent that they succeed, it will be in old-fashioned ways: by persuading faculty members to sit down and have conversations about curriculum, teaching, and student assessment, by giving instructors feedback on their teaching performance and methods, and by mandating the teaching of core skills. Even achieving these minimal goals for a learning assessment process will not be easy, in my experience, because of the structural rigidities within the university system.
It seems perfectly reasonable to spend $35 billion a year, and four or more years of two million students' lives, without any idea of what the process is accomplishing, and any attempt to change that would just lower the dignity of the professoriate.
In other news, pushing spaghetti into cans is still hard. (I'm sorry, I know that it's a homely, working-class metaphor, but I can't get over the cynicism that forty years of observing supply-side education has instilled in me.)
Posted by: Erik Lund | August 04, 2018 at 05:28 PM
Erik - I hear you!
Posted by: Frances Woolley | August 04, 2018 at 05:43 PM
I dropped out of HS to read books like safeguarding suitcase nukes. It would take 8 years to get 2.5 years worth of RF coil engineering from school. Picking my favourite fields, geoneutrinos, RF coil design, entangled radiowaves, all-optical computing and quantum structural health monitoring, I've located 50 authors of papers.
Waterloo has 10. Toronto and Ottawa 7 each. Sudbury 4. London 3. Cgy 3. Mtl 4. Van 3. Kingston 2. And Victoria, QC, Sherbrooke, Wpg, Fredericton, Hamilton with one. Toronto, Mtl, and Ottawa would have the bulk of the bad AI research. After AI gained momentum, school for all is no longer a good principle.
Posted by: robots2005 AI32080 | August 07, 2018 at 02:26 PM
I don't know the Canadian situation, but in the States curricula are already pretty constrained. I have university degrees in chemistry and accounting (long story).
The requirements for a chemistry degree are spelled out by the American Chemical Society. Basically, you need a year each of general, organic, inorganic, physical, and analytical chemistry. Physical chemistry won't make much sense without some physics, which won't make much sense without multivariate calculus. So that's the major's requirements almost everywhere in the States, with degree-level expectations set by regional university accreditation boards.*
My accounting degree was likewise constrained by the requirements of the Certified Public Accountant licensing exam, the National Association of State Boards of Accountancy, and university accreditation boards. There just aren't a lot of degrees of freedom in the system.
*For US universities, accreditation is theoretically optional but practically mandatory. There are national accreditation boards, but they don't count because nobody trusts them.
Posted by: Jay | August 14, 2018 at 06:28 PM
Some potential students already have the course material or life material. For J. Trudeau, it was pointless to take French courses; I've read 3 good works: Candide, Morte d'Arthur, and Tin Flute. The best disaster response courses I've seen were in the U of Ottawa communications degree; our media would respond to tech risks better than GWB or Japan officials. Instead of McGill, those courses and quantum courses would be best. Carleton's medical imaging isn't up to snuff though. At UBC, there are TRIUMF courses and STITCH: eventually RF Coil imaging will be wearable.
I also considered, ahem, Ryerson with many Letters of Permission. Ryerson is more liberal in choosing your courses. But there are still too many prerequisites until you finish two years of forced courses. A solution is testing that allows one to choose an elective instead. A turbulence course for aerospace covers naval design too. There is usually a programming course with science/engineering. Mechanical/medical-equipment engineering aren't even one seventh RF Coil and sensor design. Of my 3 HS electives, Graphic Arts was bad, Autocad gave me my first knowledge of hackable infrastructures 5 years before temp labour. And foods has been handy since the last PM. The Power Mech classes needed rocket motors, and Plastics needed aerospace. But the library had several nuclear war books; that is maybe the best University feature. There was only MB space for an international college student with their cheap gvmt.
Canadian students in several cities are smart enough to realize being a good person matters. Technological risks, Cdn and UK history, psychology or cognitive science or neuro courses, should be part of the core. You can get psychology from English (learning communications). Chastity through an adventure would be good for the texting generation. The ON correspondence Grade 12 English course requires either reading a weak person or a hick. It could be replaced by Robocalypse or Black Destroyer (or Ovid), getting kids ready for change. Outside of the USSR, middle management has disappeared with automation. Students know they might not have a middle class job waiting for them. Russia teaches a bit of shop classes which ensures manhood and flexibility. I'd put my junior high GATE class against any Catholic University when it comes to the reasoning that breeds communications, and I wouldn't rant against these risks if they were PM sometimes.
Curriculums can be fixed to give a gentleman motivation. The LEI can be turned into a small component of itself. The US Census Industrial Production stats can be winnowed to focus on things like sounding rockets and supplies in Faraday Cages and GMO-ed mushrooms.
Posted by: robots2005 AI32080 | August 17, 2018 at 10:42 AM
Was York with flexible curriculum. I extended the above 50 Canadian paper/patent authors to 100 globally, tallying some non-lead authors for key papers. USA 45 (NYC 7, SF 5, Phil 3, Pitt 3, Boston 3), Germany 8 (Erlangen 3), China 7 (Wuhan 4), UK 4 (Oxford 3), Switz 4, Moscow 4, Japan 3, Paris 3, S.Korea 3, Neth 3, Italy 3, Tel Aviv 3, Italy 3, Fin 2, Can 2, Iran 1, Bulg 1, Australia 1.
Cambridge has become an AI centre while Oxford did well here; I wouldn't want good Cambridge learning outcomes. If it is to hand students off to employers, your students are trained for an obsolete economy. Again, these outcomes are suggesting U of Ottawa's disaster response courses. A common core of two semesters here and another two semesters of options seems reasonable. It would be nice to be able to take a 2 year course streamlined to my chosen career: inventing new medical imaging, or lithosphere imaging. I was thinking of taking a lab 7 times at U of T for seven different superconductor experiments before I figured out RF Coils. There will be an experiment to shine a (neutrino making) laser at a laser and get light. It wasn't funded during 'Nam, and I guess the researcher got too old. An A-Bomb researcher might've found this effect in the '60s, and mixing neutrinos in the Sun also suggests it. But by the logic of training researchers for employers, his most important almost contribution is nothing and his Nuclear Bomb simulations or whatever, are everything.
The above list suggests the USA's multinationals have been a good strategy. Wuhan maybe has been attracting China's best opticians for 60 years. TRIUMF and RIM's donations built us up apart from medical imaging hubs. The latter requires scale. Wpg was maybe the 4th best medical imaging hub a few years back, but has fallen back. And MTS wasn't as big as Telus. From what I can tell, the correct course is to brain image away WMD professors and employees, and hire such that know or are learning solutions. It is training and testing with a helmet. It is communist to care about average researchers.
Posted by: robots2005 AI32080 | August 31, 2018 at 11:52 AM