Esmeralda Bon shares her thoughts on measuring impact. In this blog post, she explores what measuring impact means. In future posts, she will look at non-academic impact and specific considerations around impact for early career researchers.
As academics we seek to contribute to knowledge and the understanding of phenomena by doing research. This also tends to be a requirement of our funders.
However, in relation to the impact tracking and metrics debate (#CiteTheData), we may ask how we can, do and should measure impact, especially early in our careers.
How can we measure impact?
Assuming that making an impact is somewhat similar to having an effect, we should, theoretically, be able to measure impact the way we measure effects. Effects research tends to require an experimental or quasi-experimental design, in which it can be shown that a cause X has produced a consequence Y.
However, unlike the conduct of much of our research, the reception of our research takes place outside the lab. This is problematic, because it suggests that we can never be sure of our ‘real world’ impact.
How do we measure impact?
Luckily, there are indicators available, both qualitative and quantitative.
Qualitative examples that spring to mind are a written acknowledgement or an email from a peer. Quantitative examples are, in turn, citation, view and download metrics, and it is these metrics I tend to rely on when choosing which research to read and include.
After all, this is an easy and relatively quick way of finding out which research is getting attention. The citation metrics I personally tend to use come from Google Scholar and Altmetric.
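To illustrate just how reductive such quantitative summaries are, consider the h-index, a citation metric that Google Scholar reports for author profiles: an author has index h if h of their papers have at least h citations each. A minimal sketch of the calculation (illustrative only; the citation counts below are invented):

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:  # this paper still supports a larger h
            h = rank
        else:
            break
    return h

# Hypothetical author with five papers:
print(h_index([10, 8, 5, 4, 3]))  # 4 papers have at least 4 citations each
```

Note how much the single number discards: it says nothing about who cited the work, why, or what they did with it, which is precisely the limitation discussed below.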
How should we measure impact?
Comparing these two types of impact feedback, the qualitative kind would seem to be of greater value: it takes more effort to read a piece of work and write a response. Furthermore, to what extent can we rely on quantitative metrics to tell us whether an article has actually been read, whether it has left an impact?
After all, academics sometimes cite simply for citing's sake, to indicate that they are aware of research in the field and to provide context for the study presented. In addition, impact factors themselves may not adequately serve as impact indicators (e.g. Vanclay, 2011).
On a more general level, one may ask whether it is actually possible to quantify impact. For example, if an aspiring PhD candidate uses one article as the backdrop and inspiration for his or her PhD, does this mean that the article has had less impact than if it had been cited by 10 different scholars?
Scholars generally acknowledge that metrics have flaws, but still trust them when deciding what to read and where to publish (Tenopir, 2014).
So it is key that we, as early-career researchers, do not forget that these metrics provide no insight into the actual impact made. They can be a starting point for investigating the impact of a piece of research or a dataset, but they should never be the whole story.
References (APA style)
Tenopir, C. (2014). Trust in reading, citing and publishing. Information Services & Use, 34(1-2), 39-48. doi:10.3233/ISU-140725
Vanclay, J. K. (2011). Impact factor: outdated artefact or stepping-stone to journal certification? Scientometrics, 92(2), 211-238. doi:10.1007/s11192-011-0561-0
Esmeralda Bon, @EsmeraldaVBon is one of our UK Data Service Data Impact Fellows for 2017. Esmeralda is an ESRC-funded PhD student in the School of Politics and International Relations at the University of Nottingham, in collaboration with the Committee on Standards in Public Life (CSPL), an advisory non-departmental public body sponsored by the Cabinet Office. Esmeralda is affiliated with the Centre for British Politics and the Nottingham Interdisciplinary Centre for Economic and Political Research (NICEP).