Universities across the Western world are now educating a majority of high school graduates. What are they being taught? How high are standards? Why is freedom of expression under attack on campus? These questions and many more are the focus of a new book, Campus Meltdown: The Deepening Crisis in Australian Universities, edited by William Coleman. At The Sydney Institute on Wednesday 20 November 2019, Dr William Coleman, Associate Professor at the ANU College of Business and Economics, joined Dr Gigi Foster, Associate Professor at the School of Economics at the University of NSW and a contributor to Campus Meltdown, to discuss the problem on campus.

THE CRISIS ON CAMPUS

GIGI FOSTER

I’m very happy to be here at The Sydney Institute. I was invited originally by William Coleman to contribute a chapter to his book. The title of my chapter is “Teaching Quality in Australian Universities: Present Measures, Politics and Possibilities”. I would like to take you through three main themes. The first is: why do we need a chapter on this? Secondly, I would like to go through some of the political and economic realities that sustain some of the poorer teaching that we see. Thirdly, I would like to talk a little more optimistically about what we could potentially do, given the constraints that William so articulately described.

First, why do we need this sort of chapter? Those of you who don’t work in universities may not be aware of how we currently measure teaching quality, so I’ll talk about that.

What actually is the objective of measuring the quality of teaching?

We may wish to measure the quality of teaching to try to judge whether or not learning has occurred in a course. Has learning happened, in relation to the material being presented in that course? In relation to the horizons that have been opened for the student sitting in that course, imbibing the material and the discussions and the enlightenment that may be happening there?

We may also wish to maximise students’ capacity for further learning and indeed for further participation in civil life. We may see universities as one of the places that support and maintain the peaceful and sensible functioning of our societies through the creation of people who will be working in industries and government, and be stewards of all manner of different institutions which powerfully underpin our economy and essentially ensure our stable democracy continues.

We may also wish to maximise process utility. Learning is fun, or should be fun, and when it’s not fun maybe that’s something we can improve without sacrificing learning, to some extent. Of course, there may be a conflict between the transformational aspects of learning and the personal comfort of the learner.

Then there are more optical objectives. One may wish to be seen to be a good teacher. A university may wish to be seen to be a place at which good teaching happens in some abstract sense. And these optical incentives, which are not directly related to learning, do reflect the sort of incentives that William has been talking about.

Another thing about teaching is that optimal teaching differs by student. As a teacher, I can tell you that it’s much more challenging to achieve any of those learning objectives when you are facing a very diverse classroom. Different students – with different backgrounds, initial belief systems, orientations towards the world, ability levels, and interest in the material – all require different types of treatment in order to be enthused and to learn. That’s very hard to deliver, particularly in the modern university where face time is shorter than it was when I was in university, and especially when students endemically do not come to classes. Your opportunities to touch those people in the way they must be touched in order to grow, in order to learn, are reduced. And, so, the diversity we see in today’s classrooms is essentially a big barrier to some of the quality teaching that I remember from when I was at university.

Now, what are the teaching quality measures that we use today in Australia? First of all, in most universities – and this includes William’s and mine (the ANU and UNSW) – there will be surveys at or towards the end of term which ask students to rate the quality of teaching and the quality of the course along a number of different dimensions. Typically, the question from each of those two surveys – the course-specific one and the teacher-specific one – that is homed in on and used in policy making, promotions, career discussions and otherwise is a Likert-scaled answer to a statement such as “Overall I was satisfied with the quality of this course” or “Overall I was satisfied with the quality of this teacher”. The student answers that statement on a scale from Strongly Agree to Strongly Disagree, and the average across the responding students is attributed to you as the teacher. That average is then used in counselling you or congratulating you, as the case may be, for how well or badly you’ve done as a teacher. And that is basically the end of it: in your course, in that term, were the students satisfied with the quality of your teaching?

There are of course other measures of teaching quality in Australia. One of them is the First Year Experience Survey, which is run every five years out of the University of Melbourne. It focuses on the first year of university, not the whole of a student’s undergraduate career.

We also have QILT – the Quality Indicators for Learning and Teaching – and QILT is a big one. It’s done every year, and it gives feedback at a very high level about the quality of teaching in different faculties at different universities. That information is then filtered through to the faculties, but there is no matching of the data in QILT’s Student Experience Survey to individual teachers or individual courses.

It’s faculty-by-university level data. There’s also the Course Experience Questionnaire, which has a few questions about teaching, but not many; it is done after students graduate, and it is what used to be the Graduate Destination Survey. Again, we cannot match the answers on those surveys to individual teachers.

You would think that, in today’s age of data and technology, we would be able to leverage the vast quantities of information that universities routinely collect every hour of every day and work out whether students do better or worse when they have been exposed to a particular teacher in a classroom. But that sort of merging and preparation for analysis of the data universities are sitting on, in the service of policy making, simply does not happen at universities in Australia. What typically happens is that the data are formed into what are often called dashboards: cross-sectional snippets of what is happening at the present moment. The longitudinal aspect – teachers matched with students over time, growing and learning, or not so much – is lost.

One of the reasons for this situation is that incentives matter. As economists, William and I would never be shy to say that.

The kinds of incentives that are at play here are, first of all, career incentives of individual teachers, students’ incentives not to fill out surveys, and universities’ incentives to attract and retain students. Again, diversity also matters here because students from non-Western backgrounds may be looking for something different.  They may have different expectations implicitly about the bargain or the contract between themselves and the university, and they may evaluate the same behaviour by a teacher very differently.

What has happened is very much a co-optation of the teaching evaluation process to suit the non-learning, more optical objectives of measuring teaching, driven by many of these incentives, particularly those of the university itself. Of course, the university is the body that sets policy on the questions that appear on the course and teaching evaluation surveys at the end of every term. For example, oftentimes universities will mandate the inclusion of certain questions that arguably don’t have much to do with the quality of teaching.

Whether or not technology is being used adequately in the classroom is a very common one. There are also hot topics such as “Has this course been blended or flipped?”, as well as many things that you would not think are directly related to the quality of learning, or even to the quality of the student’s experience, but that do tick boxes in terms of what universities believe their KPIs to be in relation to how they look to their customer base. Often we also don’t dig much further when student satisfaction is high enough, so we don’t even learn from the good teachers what it is about their practice that one could potentially distribute throughout other teaching environments.

I’m going to read from my chapter, which may be somewhat confronting and may motivate you to buy the book:

…the recent meta-analysis conducted in Uttl et al. 2017, correcting for methodological flaws in many earlier studies – many related to the improper handling of small sample sizes – suggests soberingly that students’ ratings of teachers are essentially unrelated to how much is actually learned in a given course, with student evaluations of teaching capable of explaining at most 1% of the variation in levels of student learning.  In a similar vein, Nasser-Abu Alhija 2017 finds that students associate good teaching most strongly with whether they perceive the assessment in a course to be good, and least strongly with their own long-term development as a student.  The title of a recent post on the website of the American Association of University Professors recently proclaimed bluntly that “student evaluations of teaching are not valid,”[1] and went on to provide a withering indictment of the use of teaching evaluations to measure teaching or learning quality.  This implies that if teachers who receive feedback relating to student satisfaction subsequently change their decisions about delivery, assessment, or other aspects of a course in order to maximize student satisfaction in ensuing terms, these changes may have little or no impact on the amount of learning that their students achieve.

[1] https://www.aaup.org/article/student-evaluations-teaching-are-not-valid

In sum, there are trade-offs – I’m an economist, I believe in trade-offs – in measurement, as in any other type of resource allocation. At the moment, the incentives facing the players involved in teaching quality measurement do not support the achievement of the deeper, more holistic objectives of measuring teaching quality, specifically those that have to do with learning.

Now, the incentives of the players. For staff, I would say, as a staff member: at universities we see ourselves as internationally marketable. We can get jobs at universities in other countries. And how do we do that? Not by having good teaching scores, but by publishing well. That is the reality of the career incentive of the tenured academic in pretty much any faculty in Australia. So, facing that reality, what is your logical time allocation? It is to spend more time doing research and trying to get good publications in good journals, rather than spending lots of time trying to become a better teacher.

Most departments, when you get above a certain quality threshold, a certain reputation threshold, a certain standing in the world, will accept someone into an academic position when they have a “good enough” measure of teaching quality. They’re not bad teachers, and that’s kind of enough. Whereas, the research must be very, very top notch.

And research is extremely difficult and time consuming to produce. So, logically, rationally, many full-time academic tenured staff spend more time and more mental effort on their research than they do on their teaching. So many academics whose research quality is above a certain threshold are going to rationally minimise their teaching. This further plays into the idea that those who can’t do, teach.

There are also incentive problems for casual staff: when a casual staff member is hired on just a one-term contract, there is no carry-forward of learning from a prior term. It is very difficult for whatever has gone wrong or right in one term to be passed forward to the next.

In terms of students, obviously some genuinely want to learn. Often it is the good students who are frustrated by the emphasis on technology and flipping and MOOCs and all this new stuff. They just want to sit in a classroom, in a lecture, and write down what the teacher has to say, because that’s why they’re there. But, of course, many people don’t want that, and there is an argument that some of the students sitting in our classrooms today are more interested in receiving the signal of having got a degree from a quality English-language institution, and are really not interested in the learning that comes along with it. And students of all types face the same incentive: why do a survey if you don’t have to?

For the universities, their incentives are very heavily influenced by their heavy dependence on foreign students’ fee revenues, which was not the case when the Commonwealth was supporting universities to a higher level. I have a bit of sympathy for universities needing to support their activities somehow; if it’s not going to be from Commonwealth support, how will they do it? I could spend an extra three hours talking about what is going on in that space and what gives rise to these incentives. Suffice to say, the personal incentives and the institutional incentives of the people who run institutions do not support projects – such as trying to better measure learning quality at universities – that lack a shiny-brass-knob kind of appeal or a clear business case. We see some of these universities’ incentives in recent traumatic news stories, such as the Murdoch case, where Murdoch University sued a whistleblower, Gerd Schröder-Turk; in the UTS scandal of changing entrance thresholds to STEM for bizarre administrative reasons that were not well thought through; and even, to a certain extent, in the University of Sydney’s Ramsay Centre saga.

Finally, what can we do? I don’t want to be totally pessimistic. One thing that is very easy, or should be extremely easy, to implement is to mandate survey responses by students. One of the problems we have now is that online surveys typically get response rates of less than 30 to 40 per cent.

And, of course, the students who are least likely to respond are probably those who had the most problems – or possibly those who didn’t have any problems; you don’t really know what the selection mechanism is. We could simply say to students, “You may not enrol in next term’s classes if you don’t fill out the survey”. It’s not that hard; we have compulsory voting in this country, so we could do something like that.

We could also ask more about learning, such as “Has your interest in the subject gone up?” or “Do you feel better prepared for later courses?” One can ask students whether they think they have learned something; we don’t often do that on these surveys. We could also enable more customisation of surveys by teachers, so teachers can ask things that are relevant to what they care about. We could even use measures of a researcher’s ability to communicate across disciplines – like the breadth of journals in which they publish, or the degree of their engagement and impact – as a second indicator of whether they are likely to be good teachers in today’s diverse environment.

I spoke earlier about investment in the matching and analysing of data on student performance, progress and survey results within universities. There’s a massive capacity to do that and it is the Commonwealth whose job it is to stand up and require that universities provide their data in a form suitable for analysis as a condition for receiving funding.

Finally, we could reward demonstrably good teaching in promotions or via peer review of teaching, although this is only a local fix and will not change the culture of academia overnight.
