Measure for measuring’s sake?

Posted on February 12, 2013

This blog post was written by Dr Liz Allen, Head of Evaluation at the Wellcome Trust, and originally appeared on the Wellcome Trust blog.

Learning about what works and what doesn’t is an important part of being a science funder. Take the Wellcome Trust, for example: we have multiple grant types spread over many different funding schemes and need to make sure our money is well spent. Evaluating what we fund helps us to learn about successes and failures and to identify the impacts of our funded research.

Progress reporting while a grant is underway is a core component of the evaluation process. Funders are increasingly taking this reporting to online platforms, which make it easy to quantify outputs, compare funded work and identify trends.

As useful as these systems are, they carry an inherent danger: oversimplifying research impact by reducing it to things we can count. At the Trust, our attitude recalls a quote attributed to Albert Einstein: “Everything that can be counted does not necessarily count and everything that counts cannot necessarily be counted.” Including qualitative descriptors of progress and impact, alongside quantitative data, is integral to our auditing and reporting.

Some quantifiable indicators, such as bibliometrics, do help to tell us whether research is moving in the right direction; the production of knowledge and its use in future research, policy and practice are important. However, it’s not about how many papers have been published or how many patents have been filed – it’s about what’s been discovered. Without narrative and context alongside ‘lists’ of outputs, how can you know whether research is making a difference?

Different funders emphasise different things when deciding which research to fund, but as a sector we need to be responsible and avoid creating perverse incentives that distort the system. If funders send out the message that what matters is a lot of papers or collaborations, then those seeking our funding will tell us that they’ve produced a lot of papers or collaborations.

The research community needs to be pragmatic in moving the field of impact tracking and evaluation forward. We need better qualitative tools to complement established indicators of impact: traditional bibliometric indicators, such as citations, can now be set alongside qualitative assessments such as those provided by F1000 Prime. The Trust is also exploring the value that altmetrics can bring. Other channels for disseminating research, such as Twitter, are becoming increasingly popular among researchers, and it’s important that we understand their role.

Most of all, we should not forget why we fund research: to make a difference. In gathering reports from those we fund, we should encourage openness and the sharing of successes and failures alongside the products and outputs of research. At the core of evaluation is learning: how and when does research lead to impact, and how might we use our funds more effectively? A research impact agenda that encourages funders merely to count is clearly at odds with that.

Posted in: Policy