The academic quantified self


There’s a great post on Deborah Lupton’s blog in which she discusses the academic quantified self. It builds on a recent paper by Roger Burrows, Living with the h-index, in which he explores the implications of the increasing quantification of academic labour for the structures of feeling prevalent amongst academics. I’ve attached a talk Roger gave on the topic underneath the extract from Deborah’s post.

Engaging as a public sociologist using digital media invariably involves some form of quantifying the self. Roger Burrows has employed the term ‘metric assemblage’ to describe the ways in which academics have become monitored and measured in the contemporary audit culture of the modern academy. As part of configuring our metric assemblages, we are quantifying our professional selves.

Academics have been counting elements of their work for a long time as part of their professional practice and presentation of the self, even before the advent of digital technologies. The ‘publish or perish’ maxim refers to the imperative for a successful academic to constantly produce materials such as books, book chapters and peer-reviewed journal articles in order to maintain their reputation and place in the academic hierarchy. Academic curricula vitae invariably involve lists of these outputs under the appropriate headings, as do university webpages for academics. They are required for applications for promotions, new positions and research funding.

These quantified measures of output are our ‘small data’: the detailed data that we collect on ourselves. Universities too engage in regular monitoring and measuring practices of the work of their academics and their own prestige in academic rankings and assessment of the quality and quantity of the research output of their departments. They therefore participate in the aggregation of data, producing ‘big data’ sets. The advent of digital media, including the use of these media as part of engaging in public sociology, has resulted in more detailed and varied forms of data being created and collected. Sociologists using digital media have ever greater opportunities to quantify their output and impact in the form of likes, retweets, views of their blogs, followers and so on. We now have Google Scholar, Scopus or Web of Science to monitor and display how often our publications have been cited, where and by whom, and to automatically calculate our h-indices. Academic journals, now all online, show how often researchers’ articles have been read and downloaded, and provide lists of the most cited and most downloaded articles they have published.

I’m curious as to how attitudes towards metrics differ between career stages. In my own case, it often confuses me that my hostility towards audit culture can co-exist with a strange affection for Google Scholar. I still find it exciting to discover that my work has been cited. It confirms that people have read it and that it made a sufficient impact for them to choose to reference it. Google Scholar automatically indexes these citations for me, thereby sparing me what would feel like a depressingly narcissistic exercise of searching for them myself. It’s a service that has existed for longer than I have been publishing academic papers. So what did you do before Google Scholar? Simply wait until you stumbled across references to yourself, or until an acquaintance noticed them and pointed them out to you? The idea seems incredibly strange to me. Has doing a PhD in the current climate already shaped my professional self to cohere with what Burrows calls the moment of the metrics?
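For anyone unfamiliar with the metric that Burrows’ title refers to: an author’s h-index is the largest number h such that they have h publications each cited at least h times. A minimal sketch of the calculation (illustrative only; how Google Scholar actually computes and indexes it is its own affair):

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    each cited at least h times."""
    h = 0
    # Rank papers from most to least cited; h grows while the
    # paper at rank r still has at least r citations.
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: one heavily cited paper cannot raise it
```

The second example shows the metric’s well-known quirk: a single highly cited paper barely moves the number, which is precisely the kind of behavioural pressure the ‘moment of the metrics’ debate is about.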

At what point can we begin to date the emergence of the significant metricisation of the UK academy? It is the contention of this paper that the marked change in academic sensibilities invoked by new forms of measure only really began to take hold in a significant manner within the last decade or so. There has been a slow accumulation of layer upon layer of data at various levels, scales and granularities since the mid-1980s, until there came a point of bifurcation; a point at which structures of feeling in the academy were irretrievably altered by the power of these numbers. The ‘bygone world’ identified by Lock and Martin (2011) was the academy of the 1950s through to the early or perhaps even the mid-1990s. As De Angelis and Harvie (2009: 10) have observed, academic accounts of the life-world of the post-war university, as contained within studies such as Halsey’s (1992) Decline of Donnish Dominion and even Slaughter and Leslie’s (1997) Academic Capitalism, concur that ‘measure in any systematic form, with accompanying material consequences…[is]… new. Measure, as we would now recognise it, simply did not exist in the post-war university’.

So the ‘moment of the metrics’ – the point of bifurcation – could be thought of as the beginning of this century; the point at which academics could no longer avoid the consequences of the developing systems of measure to which they were becoming increasingly subject. Crudely, at some point between the Research Assessment Exercise (RAE) carried out in 1996 and the one conducted in 2001 (Hicks, 2009; Kelly and Burrows, 2012), the moment of the metrics occurred and the structures of feeling that have now come to dominate so much of academic culture began to take hold.

As discussed elsewhere (De Angelis and Harvie, 2009; Kelly and Burrows, 2012) the life-world of the university is now increasingly enacted through ever more complex data assemblages drawing upon all manner of emissions emanating from routine academic practices such as recruiting students, teaching, marking, giving feedback, applying for research funding, publishing and citing the work of others. Some of these emissions are digital by-products of routine transactions (such as journal citations), others have to be collected by means of surveys or other formal data capture techniques (such as the National Student Survey (NSS)) and others still require the formation of a whole expensive bureaucratic edifice designed to assess the ‘quality’ of research (Kelly and Burrows, 2012).

As De Angelis and Harvie (2009: 17-24) note, different metrics operate at different scales: some at the level of the individual; some at departmental, school or faculty level; some at institutional level; some at national level; and some at international level. However, all are nested or folded into each other to form a complex data assemblage that confronts the individual academic. One could, for example: have an individual h-index (see below) of X; publish in journals with an average ‘impact factor’ (see below) of Y; have an undergraduate teaching load below the institutional norm; have a PhD supervision load that is about average; have an annual grant income in the top quartile for the social sciences; work within an academic agglomeration with a 2008 RAE result that places it within the ‘top 5’ nationally; receive module student evaluation scores in the top quartile of a distribution; work within a school with ‘poor’ NSS results, placing it in the bottom quartile for the subject nationally; teach a subject where only Z per cent of graduates are in ‘graduate’ employment six months after they graduate, earning an average of just £18,500; work within a higher education institute that is ranked in the ‘top 10’ nationally in various commercially driven ‘league tables’, and within the ‘top 80’ globally, according to others.

It would be quite easy to generate a list of over 100 different (nested) measures to which each individual academic in the UK is now (potentially) subject. However, for our purposes here, we will consider just six domains: citations; workload models; transparent costing data; research assessments; teaching quality assessments; and university league tables. Through the discussion of these we will be concerned to begin to decipher common themes and points of difference in the origins, functioning, effect and affect of these different measures. To be sure, all of our examples were developed at different points in time, by different organisations and for different purposes: citation measures and league tables have been developed by private companies, publishers and newspapers to promote sales and revenue streams; the transparent costing system was developed by the Treasury in order to ensure no cross-subsidisation between teaching and research; research assessment exercises were initiated by civil servants but then ‘domesticated’ by funding agencies and scholarly bodies and associations; and the measurement of student satisfaction was prompted by politicians concerned to promote a greater level of consumerism amongst students (and their parents) in the higher education system. However, there are commonalities in that all give an emphasis to numeric representation, order and rank, all focus on the ‘measurable’, and all appear to have an interest in promoting competitive changes that alter numbers and ranks over time. The crucial thing, though, is that together they are all now experienced ‘on the ground’ as a more or less ubiquitous melange of measures – full of legacy tensions and contradictions, to be sure – that increasingly functions as an overarching data assemblage orientated to myriad forms of quantified control; an assemblage whose enactment invokes the sorts of affective reactions enumerated at the outset of this paper.

This does suggest a possible generational divide between those who made the transition into the moment of the metrics and those who entered an already metricated academy. Does the nature of resistance to quantified control vary in each case? I hope I never take these metrics too seriously, but I realise that I struggle to imagine ever ignoring them altogether. The depressing thought is that I struggle to imagine not being interested in them. Deborah asks some important questions which relate to this at the end of her post:

Should the practices of quantifying the academic self be considered repressive of academic freedom and autonomy? Do they place undue stress on academics to perform, and perhaps to produce work that is greater in number but poorer in quality? However, it is also important to consider the undeniable positive dimensions of participating in digital public engagement and thereby reaching a wider audience. Academics do not write for themselves alone: being able to present their work to more readers has its own rewards. Quantified selfers can find great satisfaction in using data to take control over elements of their lives, and also in the performative dimension of sharing those data. So too, for academics, collecting and presenting data on their professional selves can engender feelings of achievement, satisfaction and pride at their accomplishments. Such data are important to the academic professional sense of self.

Categories: Digital Sociology, Higher Education


Replies

  1. Hi Mark

    Thanks for this. I recently received a comment by an Australian colleague on the blog post you cite above, asking how any ‘pleasure’ could be engendered by the quantified academia. This is my response to him that I posted on the blog:

    “Most people, including academics, find satisfaction from positive feedback about their work. This may include the number of citations our work has received, the assessments our students give us, the number of followers we have on Twitter or readers of our blogs, for example. In this sense, there is pleasure to be gained from measures of self-quantification. I would also argue that some of these measures can be important for academics who have had an interrupted career trajectory or who are from marginalised social groups to demonstrate their ‘worth’ when applying for jobs and so on in ways that are viewed as objective by those who assess them. They thus can be powerful bargaining chips in cases where discrimination may otherwise occur.”

    I am discussing this further in my Digital Sociology book. There are many interesting issues here to explore about the ambivalences of quantifying the academic self.

    As for your question about how one measured one’s impact in pre-digitised days, well there were paper equivalents of Web of Science that were kept in university libraries as large annual volumes, and one could look up one’s citations in these. But of course they took ages to appear and excluded most humanities and social sciences journals. As far as I am concerned, Google Scholar is a marvel, and I have used my citations (and yes, my h-index) to my advantage as much as I have been able!


    • Thanks Deborah this is really interesting. I wonder how widespread these experiences of pleasure are – I sometimes wonder if there’s a performative aspect to the disavowal of experiences like this. Zizek’s notion of cynicism often comes back to me when thinking about this – people subjectively disavowing a structure while nonetheless behaving in a way which renders them objectively complicit in it.

      “and I have used my citations (and yes, my h-index) to my advantage as much as I have been able!”

      I remember chatting to you about this briefly on Twitter some time ago. One of the things I really like about Roger’s h-index paper is the historicisation of the ‘moment of the metrics’ – it’s an interesting frame of reference within which to think about how ‘quantified control’ and quantified influence relate to older forms of prestige and power. I’m not generally a fan of Bourdieu, but it seems like something that could be done in a wonderfully incisive way using a Bourdieusian approach – do you know of anything?

      • We academics are terrible cynics (and quite often hypocrites too …). It’s true we are embroiled in a moment of metrics, but academics have been metricised for a long time, even if it just meant displaying how many articles and books etc. they had published on their CVs as a measure of worth. I’m not sure where Bourdieu’s work on power could be used here, as I have tended to draw principally on his work on habitus (although that is relevant to notions of the academic habitus as a metricised assemblage!).
