Why public engagement is not “research impact”

Measuring the impact of research, then. Seems a noble enterprise, at first glance. Nobody likes the idea of taxpayer-funded navel-gazing, so we obviously need to show that what we’re doing is useful, or informative, or at least interesting to a lot of people who aren’t us.

The REF won’t help to do any of that, of course. On current showing, it will probably just expend a titanic level of administrative energy pretending to turn subjective judgments into numbers. The numbers will not particularly represent anything, but will at least be reassuringly numerical. They will therefore be accepted as a substitute for actual insight — or, indeed, a means of defeating it.

On the general mess, I’ve got nothing to say that hasn’t been said better by Stefan Collini, James Ladyman, Ross McKibbin, Iain Pears and others. (One of the problems of voicing concerns as an academic is that the profession necessarily encourages keeping your damned mouth shut if you’ve got nothing original to say. Opponents can thus represent as an Awkward-Squad minority those who are in fact merely the most articulate exponents of a strong consensus. For the record, I don’t believe I know anyone in the academic humanities who seriously doubts that the “impact” principle is wildly incoherent and inherently corrosive. But you try writing to HEFCE or the Times Higher saying “I agree with Ladyman, and I’ve got more sensible hair than he has”, and see where that gets you.)

One point which I don’t think has been covered elsewhere, however, is this: The “research impact” agenda is pre-programmed to miss most of the useful work which humanities academics do for public audiences.

If I’m right, this is a serious problem. Check the marvel that is Annex J of the REF pilot report – which is the closest thing we’ve had so far to a concrete indication of how on earth this business is supposed to work – and you’ll find a heavy focus on two factors. Firstly, trade books (inevitably, as one of the few enterprises where the humanities generate anything you can turn into folding cash money); secondly, public engagement as traditionally defined.

Now then. By the standards of my (mainly research-oriented) group at Manchester, I do quite a lot of work for public audiences, directly and as an advisor. I think it’s an essential part of the job, and I rarely turn it down. Here are a few edited highlights of recent activities.

  • October 2009. Local tour guide asks me to fact-check a Darwin-themed walk. This principally entails finding evidence to nail a few misconceptions on our old friend the Science-Religion Conflict. These, note, are questions anyone who teaches introductory hist of sci should be able to cover with minimal prep, but are nowhere near my research area.
    Query that pops up during this process: is it true that Darwin’s proposed knighthood was kyboshed by Church opposition? Thereby hangs a surprisingly complicated tale which I wasn’t able to unravel at the time, so I sought expert advice from a Serious Darwin Scholar. You may recall that it was impossible to get a minute alone with a Serious Darwin Scholar for love or money in 2009, such were the pressures of anniversary-themed lecturing, interviews and book-signing. In the end I cobbled together what I guessed was a reasonable historicist account and emailed it off with an “Is this right?” to the SDS I know best. A few days later the message came back with the electronic equivalent of a scribbled “Yup!”, which I duly forwarded.
  • May 2010. Contact at the British Council asks if I can advise on a Czech radio series about Manchester’s history and culture. I suggest various people who work in this area, one of whom (Terry Wyke) they end up using. However, they still want someone for a broad overview on science and technology. So I meet the producer and talk through some of the standard areas. Two topics make it to broadcast: early computers (which I know mainly via Campbell-Kelly and Lavington), and John Dalton’s atomic theory (for which, though I’ve skimmed the Greenaway, Cardwell and Thackray volumes at some point, I’m leaning heavily on the excellent synopsis in Bill Brock’s Fontana survey).
  • July 2010. BBC researcher contacts me about a Horizon on “the concept of one degree of temperature.” I suggest Hasok Chang: they’ve already got him. I also stress the brewery angle, suggesting my own eighteenth-century Boerhaavians and Otto Sibum’s work on James Joule, the latter of which ends up in the running order. They’re looking for someone to interview on camera: I doubt there’s much chance of their getting Otto, but tell them to try him first. It ends up being me. I accordingly swot up from the Cardwell biog before talking through it in detail with the producer. The finished product runs through Otto’s insight briefly in the narration, and has me in vision giving some very general background.

Now, what do we notice? Correct! It’s not my research. It’s not even my institution’s research, in most cases; and it may be ten, twenty or thirty years old (though it is, on every single occasion, new and interesting to the people I’m delivering it to). Any “impact statement” I’m obliged to write about my research is going to miss most of the PE element of the general argument for keeping me on the payroll.

Isn’t this merely an indication that I should switch my research attention to fields that resonate better? No: that’s a recipe for fossilising the field. The world is fascinated by Darwin, yet it’s also glutted with Darwin scholarship (and believe me, some of the Serious Darwin Scholars find this more frustrating than anyone). The ideal researcher knows how to take those pre-existing interests, and use them to lead audiences on into areas they didn’t know they’d be interested in. We are very much in the hands of the mediators, here: usually, we don’t get to do this. Sometimes we do.

So why can’t we all just agree to focus on promoting our own particular research? Because the researchers responsible for a lot of important work tend to be busy, or thousands of miles away, or at least moderately dead. (The other scenario that often crops up is the one where a decent overview of the field must acknowledge the work of seven different authors who each revile each of the others with a homicidal passion. In this case, it’s often best to seek an integrated view from someone positively too junior to be on any of their radars.)

But do we need to take up an active scholarly researcher’s time on describing other people’s research? Yes! Otherwise the TV producers and schoolteachers and so forth will go off and find someone to talk to who will convince them that Thomas Henry Huxley invented the Breville sandwich toaster. (I exaggerate. Faintly.) What they need is someone who knows the literature: its shape, its direction, its controversies, its holes. And you can only know the literature to that level if you are, yourself, writing bits of it. Funding research in the history of science certainly does foster useful public work in the history of science – but usually not in the atomistic, linear fashion which the whole “impact” agenda insists is the only way anything ever gets done.

I should clarify that, while I was doing all the stuff above, I was also developing PE work specifically out of my own research, chiefly through Drinking Up Time. This work has not, as I write this, picked up anything like the audience levels of the examples above. Perhaps some of it will. Perhaps in sixteen years’ time (Otto’s Joule paper is from 1995). You certainly can’t plan this stuff, except at the broad aggregate level.

The problem goes deeper. Anyone concerned with “economic” as well as “social” “impact” should note that, if anything, the work we’re competent to do gains in measurable earning potential the further away it gets from useful new scholarship. Textbook example: textbooks. How much cutting-edge research do you think we can smuggle into a work whose very purpose is to introduce the established field? (Probably up to about 10%, if the author is mightily, mightily ingenious.) Bonus literary example: consider the standard thought-experiment for hard-headed application of soft scholars’ skills, namely the industrial signage text consultancy proposed (in passing) in Nice Work. The scholars in question could, arguably, have turned out superior signs through being researchers in English. This would in no sense have been an “impact” of the research they were doing when they weren’t signwriting.

The public role of conscientious humanities researchers is to disseminate, not the outcomes of atomised, cost-coded research projects, but the insights due to the whole of their professional experience and to that of the people they work with (most of whom, in my case, know more than I do). NB: this is not a plea for the right to be exceptionally woolly and floaty and expressive. It is a plea against randomly bashing bits of approaches to auditing together to produce a process that will “work” only in the tangible but unhelpful sense of reliably using up time and money.

So what would you do instead then, eh?

Well, on this issue, obviously, I’d target any attempt at auditing public engagement to the contribution of the research group, rather than the research. More generally, I’d bin the whole proposed edifice in favour of a national light-touch peer review system mapped to much smaller discipline areas and their overlaps, with measures to acknowledge and document the inherent subjectivity of the whole process as far as possible (minority reports, institution response statements). Why? What would you do?


11 Responses to Why public engagement is not “research impact”

  1. alice says:

    I think you raise a really important point on how much engagement work done by academics is a matter of sharing other people’s research.

    I think this is just as true in the natural sciences, although maybe there is an argument for saying academics in the arts and humanities should spend more time on this. Projects like ‘research blogging’ are an example that comes to mind.

    One argument is that communications professionals (press officers, engagement officers, teachers, filmmakers, journalists) do this work, but as with any research communications, I think it’s good for academics to do this too.

    We could argue that it is sometimes better to get another academic’s perspective on research, rather than going to the academic themselves. Perhaps because they can be more critical (cf. the ‘post-publication peer review’ of the NASA arsenic life story) or simply because they are able to be really enthusiastic about it without getting embarrassed.

    I’d also say it is good for academics – I certainly find that talking to people outside my university about my research field helps me as a researcher and a tutor, and I’m sure this is true for others too. In particular, I think there is a strong link between this sort of public engagement and teaching – we often think of engagement as a research issue, but I think it’s a way in which universities can open up their teaching work too, and (as with research coms) find their teaching strengthened as a result.

  2. @stephenemoss says:

    The answer to the question of whether public engagement is or isn’t research impact may depend on who’s asking. As a scientist, I assume that those asking for the REF are specifically interested in our publicly funded research. Telling them about other people’s research would probably fail to cut much ice. In fact, there is little reason why those who distribute public funds for research would concern themselves with work undertaken by academics that does not use those funds.

    I should have prefaced these comments by saying that I view this whole process as a colossal waste of time and resources.

  3. The important thing to remember here is the model of impact that HEFCE are thinking of. HEFCE – and all the research councils – made big promises to the government before the 2000 White Paper.

    The government has invested tens of billions of pounds in research, science and technology, so there needs to be a commensurate return on that investment.

    So impact is just a way of demonstrating the billions of pounds of value produced.

    The ‘easiest’ (in the bureaucratic mindset) way to capture that is to come up with inventions that change the world – like the MRI scanner – and estimate the value of a few successful ones.

    The MRI scanner model has particular salience here, because everyone in government regrets never getting the returns on the research done on that at the University of Nottingham.

    So what they want to do is find these world-changing discoveries that have ‘big ticket’ returns, and they only need to find enough of them to justify the investment.

    The issue for arts and humanities research is that very little of this ‘big ticket’ research is done, and very few people are doing it – the Serious Darwin Scholars you mention being an example.

    From their perspective, public engagement is merely equivalent to the person who screws the light bulbs into the MRI scanner – not really ‘impact’, because it did not make the scanner possible.

    I do not for one second believe that this is a sensible view of the world, but unfortunately it is a view of the world that currently prevails.

    They certainly do not have the slightest care for the effects on those people who, in the chase after this impact, are going to find their work in engagement downgraded and reduced to the status of manual labour.

    But unless you challenge its underlying causes – and to my mind the key moment here is the Faustian pact of 2000 with HM Treasury – then even with the best intellectual arguments in the world (and this post is certainly one of them) you are not going to be able to challenge the impact agenda meaningfully.

  4. Tom Whyntie says:

    For a scientist’s perspective on “impact”, Peter Cole’s blog is worth a read; for example, this post makes some good points.

    As I think you suggest, the debate about “impact” is pretty meaningless without actually defining what “impact” actually is (I don’t have a clue). But I agree with Alice — you’ve raised an excellent point about the issue of talking about other people’s research.

    For me, the (necessary) distinction is quite straightforward: if you weren’t involved in the original research, you’re acting as a science/humanities communicator. And there’s nothing wrong with this — as you point out rather nicely, there are many benefits to doing this. The problem, I think, is that universities/departments don’t see communication as an intrinsic part of the research process. I believe it is, and that’s what needs addressing by the REF.

  5. alice says:

    p.s. Someone reminded me of the recent Concordat for Engaging the Public with Research. I wonder if this event is interesting/relevant/a place to raise such concerns.

    p.p.s. Sorry for the spelling/grammar in the above comment. I promised myself I’d comment before I went for lunch, and hunger does not necessarily equal coherence.

  6. Jamie says:

    Firstly, I think you’ve raised a really important issue, and one that needs thinking about at length.

    A little rushed, but just to pick up on Alice’s comment: it also occurred to me that a lot of engagement gets done by natural scientists that wouldn’t be considered by any REF analysis.

    But, I think most of that outreach sits much more closely with the work that scientists do and, as a proportion, counts for little of the overall impact created through their man hours. I’m not saying it’s trivial, just that I think it’s *far* more of an issue for those in the humanities.

  7. James says:

    Interesting to read the comments from people concerned with science policy. My discussion above addresses only the humanities because that’s all I can talk about from personal experience.
    There’s a distinction worth emphasising here. The “impact” concept was first defined around the assumption that research routinely creates intellectual property with money-making potential that can be realised in ways that are easy to audit. That assumption holds pretty well in many (not all) areas of maths, the sciences and engineering. In those fields, the impact agenda looks fairly coherent (though what it coheres into may tend to be bad news for good researchers).

    For the humanities, by contrast (and in areas of the social, and occasionally the natural sciences) the criteria as originally proposed just make no sense whatsoever, as Collini’s 2009 piece explains. Research impact is not a thing.

    Public engagement, on the other hand, is a thing, and is a thing which humanities scholars have a good reputation for doing in ways that sizeable audiences find worthwhile. You know: Historians on the telly! Engaging young people through music! Learning from and with bilingual citizens! All That Stuff. An approach thus presents itself: stick some public engagement work in the impact box. This will look nice and responsible when written up in the case study format, in contrast to accounts of humanities trade book sales, which just look desperate.

    Part of my point in writing, then, was gently to point out that the impact agenda is not a golden opportunity to get more PE onto the agenda. It has co-opted PE in a highly arbitrary fashion, and its effect, in skewing our PE efforts towards individual, narrowly defined projects, may well be to exclude most of the engagement opportunities that research-active scholars are best placed to take up.

  8. James says:

    Incidentally, here’s David Willetts’ speechwriting collective in reassuring mode, yesterday:
    “HEFCE has since piloted [impact] across several disciplines. The REF Panel on English Language and Literature was – by all accounts – one of the star turns in the pilot exercise.”
    He got that right. The report from the Eng Lang and Lit panel (the abovementioned Annex J) is a bravura prose exercise. It goes a long way towards defining a realisable process that could appear to serve the bizarre and incoherent underlying plan without much visible inconsistency. It’s the kind of stuff I hope I’d be up to the task of writing were I forced to.

  9. Pingback: People, not papers: rethinking ‘impact’ | Responsible Innovation

  10. Pingback: KABOOM: Exploding ‘impact’ « through the looking glass
