It may seem late in the day, but now that the principle of paying £9,000 a year for university seems to have been established, questions are being asked about what students get for it. The argument has long been made that they get a good job: figures from the Department for Business, Innovation and Skills show graduates earn nearly £10,000 a year more, and are far more likely to be employed, than people without degrees. But do they secure these jobs because employers are impressed by a degree, or by the extra skills graduates develop by studying for one? And while university prospectuses boast of excellent teaching and extensive libraries, are students actually learning anything?
This is the idea behind new efforts to explore so-called “learning gain”. The Higher Education Funding Council for England is about to announce around a dozen pilot projects to look at ways to measure the skills and knowledge students develop in higher education. The projects range from surveying students at the beginning of their courses and then in years two and three to test how ready they are for employment, to asking them at the beginning and then again at the end of their university studies to write essays to test their ability to analyse, synthesise and think critically.
Geoff Stoakes, head of research at the Higher Education Academy, which is working with Hefce and BIS on the initiative, and which has set up a steering committee, says: “Students need to know what is being gained by their three or four years at university. Equally, government needs to know why it is investing in that process via the loan system.”
The UK is not the only country to have woken up to the issue. Interest in learning gain has grown internationally not only because of increased tuition fees but because new technology has made it easier to track students and gather data, both on their performance and what happens to them after graduation.
It has also been influenced by the publication in 2011 of the book Academically Adrift: Limited Learning on College Campuses. Written by Richard Arum of New York University and Josipa Roksa of the University of Virginia, the book draws on survey responses and the Collegiate Learning Assessment, a standardised test administered to some US students in their first semester and again at the end of their second year. The results showed that a large proportion of students made no significant improvement in skills during their first two years of college. A follow-up, Aspiring Adults Adrift, published in 2014, which traced the same cohort of undergraduates as they finished college and entered the working world, was no more positive.
“I think as an educator and as a researcher it is imperative to measure learning gains,” says Arum. “The question is for what purpose, and what aspects of learning gains and outcomes are worth measuring.”
One attempt to answer this is the Assessment of Higher Education Learning Outcomes (Ahelo) run by the Organisation for Economic Co-operation and Development. It has been developing a way to test students across different institutions and countries, to discover how much they have learned. The expectation is that this could lead to league tables of the kind now compiled for schools from the OECD’s influential Programme for International Student Assessment (Pisa). South-east Asian countries, with relatively young higher education sectors, and which already perform well in the Pisa tests, have been noticeably the keenest on the project, but the former UK universities minister David Willetts also made positive noises about it. He suggested it could form the basis for a new UK measurement of teaching quality.
Earlier this month, however, it was announced that England would not be taking part in Ahelo, and many have greeted the news with relief. “I thought the whole thing was a nonsense,” says Peter Williams, former chief executive of the Quality Assurance Agency, who took part in the initial Ahelo meetings. “I was very pleased to hear they aren’t going forward with it. It seemed to me impossible.”
Alison Wolf, Sir Roy Griffiths professor of public sector management at King’s College London, says: “It is basically impossible to create complex language-based test items which are comparable in difficulty when translated into a whole lot of different languages. And that is before you even start on whether a given set of items can possibly be equally appropriate regardless of the subject studied or the very different nature of higher education courses in different countries, or the level of similarity between OECD question formats and those used for assessment in the system concerned.”
Spencer Wilson, a spokesman for the OECD, says that in spite of a 31 May deadline for countries to decide whether or not to take part, the OECD is still waiting for a full set of responses and is unwilling to comment further until it has received them.
Earlier this year, Andreas Schleicher, the OECD’s director for education and skills, told Times Higher Education that institutions with long-established reputations had potentially much to lose in the short term from the project because it would create a more level playing field, not influenced by past reputation.
Similar issues could affect attempts to measure learning gain in the UK, suggests Charlie Ball, deputy director of research at the Higher Education Careers Services Unit. “Some research institutions have good reason to be worried – they won’t necessarily come out the right side of a value-added measure,” he says. “And they have the resources and political links to modify things in their favour, which adds an even more interesting dimension.”
William Locke, reader in higher education studies at University College London’s Institute of Education, says it is nevertheless essential that any attempt to measure learning gain should record value-added or “distance travelled rather than simply what students come out with in terms of degree qualifications or jobs”. This is because these outputs are not entirely under institutions’ control. Often they are influenced by schooling, social background and the level of qualifications students have when they enter university. Another problem, he points out, is that higher education is so much about independent learning.
Philip Altbach, director of the Center for International Higher Education at Boston College, says any measurement of learning gain should take differences between elite and other kinds of institutions into account. “A hundred per cent of students who go to Harvard are very smart when they get in and they aren’t going to be stupider when they come out,” he says. “But how much they gain from it is less clear, and you could say the same for a lot of institutions in the UK.”
If finding robust measures of just what students have gained from university is problematic, holding institutions accountable for it creates even more issues. While no one could argue with measuring learning in order to improve professional practice, says Arum, “the question is, can you impose an accountability framework that doesn’t do more harm than good?”
This is something the government will have to tackle if it goes ahead with plans to allow institutions that can demonstrate high-quality teaching to raise the student fee cap in line with inflation. Yet even those most sceptical about whether it is possible to measure what students have learned think it is worth a try.
Wolf says that an OECD-style multi-country test is not the answer. “Nor is it obvious how our own current government proposes to assess ‘teaching quality’. There is no evidence that they have begun to grapple with a near-impossible task.” If assessing learning is hard, assessing teaching quality is even harder, she adds. “I do, however, think that the question of how much people actually learn on degree courses is a major one, long overdue for serious attention.”