The rise of altmetrics: Shaping new ways of evaluating research

Ema Pavlović explores developments in approaches to measuring the impact of research.
Researchers often feel under enormous pressure to publish (or perish). They might fear that no one will even consider them for a grant or a position when they compare themselves to peers with years of experience. The impressive h-indexes of others might frighten them, no matter what their own research is about and how it could impact their field or society.

That fear, though, rests on traditional research metrics, and times are changing when it comes to assessing research. New metrics have emerged, along with new tools for measuring research impact.

In this article, we will give you an overview of traditional and alternative metrics and the tools used to measure them, as well as provide insight into their limitations. We will also show how important it is for researchers to promote their expertise in non-traditional ways.

 

The complex nature of R&D impact

Defining impact is a challenge of its own. There is an entire field of study dedicated to assessing (scientific) research impact, called scientometrics, a sub-field of bibliometrics. Over the last two decades, scientometricians, researchers and research administrators have been very vocal about the ways research is rated.

“Too often people think too narrowly about what ‘impacts’ can mean.”

as Sir Philip Campbell, editor-in-chief of Springer Nature, says.

In his 2005 research paper about the Journal Impact Factor, Campbell emphasised that research impact is

“a multi-dimensional construct that cannot be adequately measured by any single indicator.”

Because his work at Springer Nature is focused on research from across all disciplines, directly or indirectly related to societal challenges and the themes of the UN’s Sustainable Development Goals, Campbell prefers the term research impact over scientific impact.

As Campbell says,

“The latter language is perceived by some to exclude the vital contributions of the humanities and social sciences, for example, there are many paths by which research can impact and influence other research and activities outside research.”

And that is exactly why many governments and funders are trying to include more alternative ways of assessing research. Several questions need answering: What has changed in research metrics over the years? How can quantitative metrics be complemented? And is that possible at all?

 

A brief history of research metrics

Since the 1960s, when Eugene Garfield created the Science Citation Index (SCI, launched in 1964), research metrics have focused on counting the citations received by research articles published in journals. Until the late 1990s, however, validation of a research paper by peer review was sufficient to establish its quality. Peer review was a binary method of evaluation: good vs. not good.

The advancement of technology and the introduction of automation made counting citations easier and more frequent. It started with the launch of the Web of Science database in 2002, followed by Elsevier’s Scopus and then Google Scholar, both in 2004. From there, research evaluation shifted from a binary verdict to more graduated ratings. This shift raised many concerns in the academic community, especially with respect to allocating research funds.

Jan Hrušák, Chair of the European Strategy Forum on Research Infrastructures, explains:

“The wrong use of the metrics, as a decision supporting instrument, leads to the present crisis of reproducibility in science, is enhancing tendencies for scientific misconducts by putting too much emphasis on formal achievements with lacking quality control.”

Until recently, all eyes were on just a few indices, which became the fundamental means of assessing research. Although it is now clear that they cannot actually indicate the quality of research, and therefore its impact, they remain relevant in many respects. What are these traditional indices, and why do they still matter?

 

The impact factor obsession

Over the last decades, numerous studies have tried to determine which of the traditional bibliometric indicators measures research impact most accurately.

The most popular way to assess a researcher’s impact has been to calculate the h-index. Jorge Hirsch, a physicist at the University of California, San Diego, defined it in his 2005 paper “An index to quantify an individual’s scientific research output”. He introduced the h-index as “the number of papers with citation number ≥h, as a useful index to characterize the scientific output of a researcher.”
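
Hirsch’s definition translates directly into a few lines of code. Here is a minimal Python sketch (our illustration, not from the original paper) that computes an h-index from a list of per-paper citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # the paper at this rank still supports h = rank
            h = rank
        else:
            break
    return h

# Example with made-up citation counts for six papers:
print(h_index([10, 8, 5, 3, 2, 0]))  # prints 3: at least 3 papers have >= 3
                                     # citations, but not 4 papers with >= 4
```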

The h-index started a whole “impact-factor obsession” era.

Recruiters started asking candidates for their h-index, and pressure was put on PhD students to publish in high-impact journals to gain additional, external funding for their research.

As Diana Hicks, Paul Wouters, Ludo Waltman, Sarah de Rijcke and Ismael Rafols put it in Nature in 2015, “universities have become obsessed with their position in global rankings such as the Shanghai Ranking and Times Higher Education’s list.”

That commentary would later become known as the “Leiden Manifesto for research metrics” and form the basis for new policies on assessing research, which we will return to later.

However, the Impact Factor is primarily a journal-level measure: the average number of citations received by the articles a journal published within a two-year window. It is important to stress that citation rates vary between research fields.

For example, the life sciences attract more citations than the social sciences. Several factors contribute to this, such as the countries in which researchers work and publish, and their age and academic status. There is also a difference in how the life sciences, especially medicine, are reviewed and cited compared to the social sciences. As stated in the Leiden Manifesto:

“Top-ranked journals in mathematics have impact factors of around 3; top-ranked journals in cell biology have impact factors of about 30.”
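
Setting field differences aside, the two-year calculation itself is simple. Here is a small Python sketch of it, using invented numbers purely for illustration:

```python
# Two-year Journal Impact Factor for year Y:
#   citations received in Y to items published in Y-1 and Y-2,
#   divided by the number of citable items published in Y-1 and Y-2.
# All figures below are hypothetical.
citations_received_2020 = {"2018": 310, "2019": 290}  # citations in 2020, per back year
citable_items = {"2018": 100, "2019": 100}            # articles and reviews published

jif_2020 = sum(citations_received_2020.values()) / sum(citable_items.values())
print(f"JIF 2020 = {jif_2020:.1f}")  # 600 / 200 = 3.0
```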

Image: Tools for calculating Journal Impact Factor, copyright Labs Explorer, reproduced with permission

 

The logic behind the “impact-factor obsession” goes like this: if I publish my research in a journal with a high Journal Impact Factor, my research will have more impact than if it were published in a low-impact journal. The higher the Journal Impact Factor, the higher the impact of an individual piece of research. But is that really true?

 

How influential are you as an individual researcher?

Being a researcher in a culture of measurement is very challenging. Jan Hrušák, Chair of ESFRI, says that

“the influence of a researcher is given by his/her position in the corresponding community.”

Thus, he stresses that “quality, quantity and impact must be assessed from different perspectives and cannot be projected on a linear scale.”


Image: Indices for author-level metrics (g-index, h-index, i10-index), copyright Labs Explorer, reproduced with permission

 

According to the study “Quantifying the Impact and Relevance of Scientific Research”, conducted by William J. Sutherland, David Goulson, Simon G. Potts and Lynn V. Dicks in 2011, “there is a weak positive correlation between our impact score and the impact factor of the journal.”

But when measuring the impact of an individual researcher, Campbell states, “it is inappropriate to consider only the journal in which they have published.”

And still, numerous indices try to do exactly that. The h-index, as already introduced, inspired many others in an eternal quest to find the accurate research impact assessment. It was followed by Leo Egghe’s g-index, proposed in his 2006 paper “Theory and practice of the g-index” as “an improvement of the h-index by giving more weight to highly-cited articles.”

With the launch of Google Scholar in 2004, the i10-index was created as “the number of publications with at least 10 citations.” However, it applies only to Google Scholar, via Google’s My Citations feature. And yet, none of these indices has managed to answer the question: how do we accurately measure global research impact, including beyond the academic world?
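
Both definitions are just as mechanical as the h-index, as this illustrative Python sketch (with made-up citation counts) shows:

```python
def g_index(citations):
    """Largest g such that the g most-cited papers have >= g**2 citations in total."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites  # citations of the top `rank` papers combined
        if total >= rank ** 2:
            g = rank
    return g

def i10_index(citations):
    """Number of papers with at least 10 citations (Google Scholar's definition)."""
    return sum(1 for c in citations if c >= 10)

papers = [40, 18, 12, 7, 2, 1]
print(g_index(papers))    # prints 5: the top 5 papers total 79 citations >= 5**2
print(i10_index(papers))  # prints 3: three papers have at least 10 citations
```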

 

Altmetrics – a supplement to traditional metrics or a shift in research evaluation?

During the last two decades, it has become clear that we cannot look only at the count of citations.

The assessment of the research impact should also include posts on social media, policy documents, Wikipedia pages, mentions, etc. Still, as with traditional research metrics, the challenge of answering certain questions remains. How do we measure the impact beyond academia and how do we measure influence outside the academic community?

Alternative metrics have received more attention over the last ten years. An early example is Mendeley, an online reference manager and social network for academics that enables them to share research and collaborate with others. It includes several ways of tracking research impact.

A shift happened: from citation-based metrics to alternatives that also measure impact beyond academia.

The term “altmetrics” was proposed by Jason Priem, Dario Taraborelli, Paul Groth and Cameron Neylon in their “Altmetrics: A manifesto” from 2010. Their approach was different, focused on calculating “scholar impact based on diverse online research output, such as social media, online news media, online reference managers and so on.” By doing so, the authors believed, altmetrics “demonstrate both the impact and the detailed composition of the impact.”

 

How can marketing help improve research rank?

Since then, the term altmetrics has been adopted for metrics that include impact assessment beyond academia and citation counts. With altmetrics, the scientific community became more aware of the benefits of content, social media and digital marketing.

One of the best-known companies dedicated to calculating alternative metrics is, logically, called Altmetric. The company was founded in 2011 by Euan Adie with the mission of tracking and analysing the online activity around scholarly literature, not only for individual researchers but also for institutions, publishers and funders.

Sarah Condon, marketing director at Altmetric, explains:

“It collates what people are saying about published research outputs in scholarly and non-scholarly forums like the mainstream media, policy documents, patents, social networks, and blogs to provide a more robust picture of the influence and reach of scholarly work.”

Since 2012, the company has monitored a range of non-traditional sources, searching for links and references to published research.

“Today the Altmetric database contains 124.6 million mentions of over 27.8 million research outputs tracked (including journal articles, books, datasets, images, white papers, reports and more), and is constantly growing,” concludes Sarah Condon.

If we apply marketing principles to scientific dissemination, research is the product and funders are the customers. Customers need to know about the product if you want to secure financing. How do you achieve that?

There are countless options for promoting your research online. Having your own website or, at a minimum, active and up-to-date social media profiles can do wonders. That way, researchers can easily connect with influencers in their domain, expand their network and make sure their work reaches a broader audience. They can even invest in social media advertising campaigns, or target the audience they want to reach in a certain timeframe, for example around an event.

Scientists are starting to work with experts in communication and marketing to make their research and publications stand out from the crowd. For example, Labs Explorer specialises in scientific dissemination and content marketing as a marketing service provider for R&D teams, and has years of experience supporting scientists and CROs in their communication efforts.

 

Altmetrics tools

Just as there are many tools for measuring the quantitative impact of research papers and journals, new tools are now emerging that attempt to measure qualitative impact as well.


Image: Tools to evaluate research impact of scientific publication with altmetrics and what they take into account, copyright Labs Explorer, reproduced with permission

 

A list of apps that measure alternative metrics can be found alongside the aforementioned “Altmetrics: A manifesto.” Here are the most popular altmetrics tools right now:

  • Altmetric – one of the leading companies providing comprehensive impact measurements, not only for individual researchers but also for institutions, publishers and funders. It includes social media traffic generated by the publication of a given work, looking at everything from “patents and public policy documents to mainstream media, blogs, Wikipedia, and social media platforms.” (A small example of querying its public API follows this list.)
  • PlumX – Plum Analytics was founded in early 2012 with the vision of bringing modern ways of measuring research impact to individuals and organisations that use and analyse research. Since 2017, it has been part of Elsevier. It offers an embeddable widget for live tracking of altmetrics.
  • ImpactStory – an open-source, web-based tool; professors can add an ImpactStory widget to their own websites to get live altmetrics for papers and other research products. It sorts metrics by engagement type and audience.
  • ResearchGate – besides publishing, sharing and commenting on research, this platform calculates an RG Score, a “scientific reputation based on how work is received by peers.”
  • Academia.edu – a platform where researchers create profiles, upload papers and track readership and use.
  • SymplurRank – a specialised algorithm for measuring influence and content in the field of healthcare.
  • Dimensions – a research insights platform offering the Dimensions Badge tool, which enables researchers and organisations to showcase the citation counts their publications have received.
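
As a taste of how such data can be pulled programmatically, the sketch below queries Altmetric’s public Details Page API by DOI. The endpoint and response fields follow Altmetric’s public documentation, but treat the exact field names as assumptions and verify them against the current docs:

```python
import requests

def altmetric_summary(doi):
    """Fetch online-attention data for a DOI from Altmetric's public API."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:
        return {}  # Altmetric has no record of this DOI
    resp.raise_for_status()
    data = resp.json()
    # Field names as documented at the time of writing; verify before relying on them.
    return {
        "title": data.get("title"),
        "altmetric_score": data.get("score"),
        "news_mentions": data.get("cited_by_msm_count", 0),
        "tweets": data.get("cited_by_tweeters_count", 0),
    }

print(altmetric_summary("10.1038/480426a"))  # example DOI from Altmetric's docs
```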

 

How do governments and institutions adapt to altmetrics?

Sir Philip Campbell has been following this shift from citation-based metrics to altmetrics.

“There is now a substantial movement away from this narrow practice,” he says, stressing its importance because “it is an enormous challenge for universities and funders to avoid being overly influenced by impact factors when assessing submissions for hiring, promotion, grants or publications.” From his own experience, “some papers in low-Impact Factor Journals can have a considerable societal impact e.g. in a particular country.”

The San Francisco Declaration on Research Assessment, known as DORA, sets out a series of recommendations to “improve research assessment, emphasising the quality of scientific output over the reputation of scientific journals.” It was initiated in 2012 by researchers from the American Society for Cell Biology. DORA recommends abandoning the Journal Impact Factor and other journal-based metrics in favour of “new, qualitative research impact indicators such as influence on policy and practice.” Its recommendations are addressed to funding agencies, researchers, publishers and institutions.

To date, 1,893 organisations and institutions have signed the DORA declaration. Signatories include the French National Research Agency (ANR), a public administrative institution under the authority of the French Ministry of Higher Education, Research and Innovation. The Association of Universities in the Netherlands relied on DORA for its new Strategy Evaluation Protocol covering the coming six years, and Finland followed this practice in its new Recommendation for the responsible evaluation of a researcher in Finland.

Another initiative came from the previously mentioned group of authors: Hicks, Wouters, Waltman, de Rijcke and Rafols. In April 2015, they published 10 principles to guide research evaluation, known as the Leiden Manifesto.

 

Video: The Leiden Manifesto for Research Metrics, source: Vimeo

 

In 2015, the Independent Review of the Role of Metrics in Research Assessment and Management published a 180-page document entitled “The Metric Tide: Final Report with executive summary”, recommending more objective ways of evaluating research.

Global institutions also publish reports with complex assessments of R&D institutions using several indices. For example, the Shanghai Ranking uses citation-based metrics when evaluating the impact of universities. Specifically, its ranking criteria include “Highly Cited Researchers” under the Quality of Faculty section, and it also looks at papers indexed in the Science Citation Index-Expanded and the Social Science Citation Index. Each of these indicators weighs 20% of the total ranking score, and both are retrieved from the Web of Science.

However, it is becoming more common for institutions to measure their external impact with alternatives to traditional metrics. One way to do so is with Altmetric for Institutions, a service that measures the impact an institution has on public policy or the way it is “encouraging collaborations”. The tool can also drill down to inter-departmental groups or researchers from complementary fields.

 

Citation-based metrics vs. altmetrics – should we choose between them or combine them?

Some argue that relying exclusively on quantitative metrics, or scientometrics, has led to the “publish or perish” principle. The term describes the pressure put on scientists and researchers to publish their work in order to have a successful career.

One shortcoming of citation-based metrics is that they usually take a long time to accumulate: it can mean waiting two years before the Journal Impact Factor tells you anything about a publication. On the other hand, this kind of metric matters greatly in applied disciplines such as clinical medicine. It is also important to stress that such metrics cannot be compared properly across disciplines: a cultural anthropologist will never have an h-index comparable to that of a physicist.

Altmetrics aren’t a true substitute for citation-based metrics, but they can “point to work of public or other societal interest,” according to Sir Philip Campbell. He believes we should consider altmetrics an “unreliable indicator of a type of potential interest, and not a reliable quantitative measure of total interest, academic interest or societal impact.”

The trend for altmetrics highlights the urgent need for scientists to disseminate their work beyond the academic community: to industry, to society, and more. This requires creating specific content such as explainer videos, running social media campaigns, and sponsoring content in non-scientific journals.

In the end, as Campbell says,

“There is no substitute for people looking at researchers’ outputs of all types, and responses to those by researchers and others, and forming a qualitative human judgement.”

 

This blog post was originally published on Labs Explorer.

 


About the author

Ema Pavlović is a graduate student of Media Research at the University of Zagreb, Croatia.

She is currently in France, working on communication and marketing for Labs Explorer, a marketing service provider for R&D teams. Building on a community of 5,000+ labs from private and academic organisations, Labs Explorer helps its users gain visibility in a globally connected, high-quality network while enabling them to get in touch with stakeholders from complementary research fields. It provides specialised content writing and dissemination services to R&D organisations aiming to open up new funding and collaboration opportunities.
