Ever since university tuition fees were converted into income-contingent repayable loans, we have witnessed a steady growth in mechanisms for surveying students about their experience: the National Student Survey (NSS), the International Student Survey (ISS) and the Times Higher Education Student Experience Survey, to name just a few.

Why so many? Could it be because there is no clear definition of the term ‘student experience’? The term is ambiguous, and a student’s opinion (whether good or bad) is shaped by a host of external but contributory factors, ranging from a good students’ union or a lively social scene to how well structured the course was. For every student prioritising course structure and delivery, there will be another prioritising facilities and still another prioritising industry connections. Admittedly, all the surveys seem, on the surface, to cover the same general areas, yet subtle differences in the questions elicit a wide variety of responses.

The lack of alignment between surveys presents institutions with a real dilemma. Which survey, which set of student responses, truly represents how things are at their institution? Why should one set of responses carry more weight than another? Even if it were possible to rank the surveys in order of importance, the real conundrum remains: on what basis does an institution judge one set of questions or responses to be more important than another? Is it realistic and practical to expect universities and colleges with constrained budgets to invest in a livelier and more exciting social scene over academic staff or teaching space in order to ‘improve’ the student experience?

It's clear that as budgets tighten, institutions need to think carefully about where to invest, and student surveys are far from a clear indicator of where that should be. Some in the sector would argue that they have actually ‘muddied the waters’; others, that such surveys were never designed to measure how good an institution is. Indeed, the NSS describes the benefits of its survey in two ways: first, it presents current students with a ‘picture’ of what their learning experience was like that year; second, it helps prospective students make an ‘informed’ choice about what and where to study. And yet, in my experience, one of the most common topics of discussion, and the conundrum taxing the minds of managers and administrators, is “how can we improve our NSS score, and where should we invest to make a difference?”

Could it be that the results from surveys like the NSS have become an unofficial Higher Education (HE) league table, a purpose for which they were never intended?