What does impact mean to me?

David Kingman

Our #DataImpactFellows work in diverse areas, some of them combining more than one role. This gives them a great chance to explore the question “What does impact mean to me?” Here are David Kingman’s thoughts.



Over the past year I’ve had the experience of working for two different employers simultaneously: a small think tank and a large public sector organisation.

Although the work I do for the two organisations is fairly similar, as it mainly revolves around generating new insights from socio-economic and demographic datasets, I’ve found that the outputs I produce from these data tend to be quite different. This is largely a consequence of the different audiences consuming my work in the two roles.

The Intergenerational Foundation


As the Intergenerational Foundation (IF) is a think tank, my work there is produced for several different audiences:

  • The general public, who we hope will discover our research through traditional media coverage and on social media;
  • Journalists, who we try to attract by writing eye-catching press releases;
  • Policy-makers, who we try to reach by summarising my analysis in responses to public consultations and by giving evidence to All Party Parliamentary Groups.

Hopefully, if a piece of work succeeds in generating impact with one of these audiences, that should make the research more likely to create impact with the others as well.

For example, last year IF published a piece of research investigating the inadequate size of many of the new-build homes in England created using permitted development rights, based on my analysis of 18 million Energy Performance Certificates. We believe that the media coverage this research received contributed to the recent announcement that all homes built under this mechanism will in future have to abide by the Nationally Described Space Standard.

The Greater London Authority


The analysis which I do on behalf of the Greater London Authority (GLA) is similar in the sense that it often has more than one audience, but the audiences tend to be different. Also, the purpose of my work tends to differ between my two roles.

Where at IF the end goal of my analysis is usually to try to produce changes in government policy, by convincing my audience that there is a strong case for instituting a reform that would lead to better outcomes for young people, the goal of most of my work at the GLA is to present the facts in a much more neutral fashion and help the audience reach their own conclusions.

Having said that, a lot of my work at the GLA is still aimed at the general public, and we spend a lot of time thinking about how we can serve their needs better by making improvements to the London Datastore, which is our main data portal. For example, the first big task that I was asked to complete after I joined the GLA was to audit all of the datasets which my team was maintaining on the London Datastore, and to work out how the ones which we wanted to keep on maintaining could be reorganised to make them more accessible to our users.

Rather than only publishing analysis in traditional formats, such as PDF reports (which tends to be how we publish our research at IF), the GLA Intelligence Unit is also increasingly pushing the London Datastore in new directions by incorporating more interactive content – such as our interactive mapping tool for exploring local variations in Covid-19 mortality across London – which enables end users to play around with the data and draw their own conclusions from it.


In contrast to the work which I do for IF, a lot more of my work at the GLA is focused on generating impact within the wider analytical community, both inside and outside the GLA itself.

For example, I completed a big piece of work over the summer which involved building a prototype Reproducible Analytical Pipeline (RAP) using R and Github. A RAP is a computer program which automates the different stages involved in undertaking a piece of data analysis. These can include scraping raw data off a website, cleaning and wrangling it into a different format, and then extracting insights from it by creating visualisations and summary statistics, all of which are automatically updated each time the program runs.
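To give a flavour of the idea, here is a minimal sketch of what one run of a RAP-style script in R might look like. The data source, column names, and file paths are all hypothetical stand-ins, not the actual GLA pipeline; a real RAP would also pin package versions and keep the whole thing under version control on Github.

```r
# Minimal RAP-style sketch (hypothetical data and file names).
library(readr)
library(dplyr)
library(ggplot2)

# 1. Extract: read the latest raw data. In a real pipeline this step
#    might scrape or download the source afresh on every run.
raw <- read_csv("data/raw_population.csv")

# 2. Transform: clean and wrangle into the shape the outputs need.
clean <- raw %>%
  filter(!is.na(population)) %>%
  mutate(year = as.integer(year)) %>%
  group_by(borough, year) %>%
  summarise(population = sum(population), .groups = "drop")

# 3. Publish: regenerate the outputs, so re-running the script
#    automatically refreshes every table and chart.
write_csv(clean, "outputs/population_by_borough.csv")

ggplot(clean, aes(year, population, colour = borough)) +
  geom_line() +
  labs(title = "Population by borough")
ggsave("outputs/population_by_borough.png")
```

Because each run repeats the same extract–transform–publish steps, the outputs never drift out of sync with the underlying data.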

The key thing about this project was that, in addition to producing the analysis itself, an important goal was to host a working codebase for building RAPs on Github, which we could share with other analysts who wanted to implement a RAP using R themselves.

Given that in both of my jobs I now work almost exclusively with open-source software tools, built on work which thousands of other developers have contributed for free, I think that publishing code which other users may end up using in their own projects is an important new way of generating impact from the things I do with data. Others agree, including the Social Metrics Commission.

The other way in which the audiences I produce analysis for differ between my two jobs is that my work at the GLA is often for internal customers, such as policy teams or one of the deputy mayors, whereas, as I outlined above, the vast majority of my work at IF is public-facing.

Producing analysis for this type of audience often involves distilling a complicated piece of analysis down into a small number of key points, which is also where being able to produce impactful data visualisations using R’s ggplot2 package is an important skill to master.
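One common way of making a chart carry a single key point is to grey out everything except the series the briefing is about. The sketch below uses made-up illustrative numbers, not real GLA data:

```r
# Sketch: foregrounding one key point in a chart (hypothetical data).
library(ggplot2)

df <- data.frame(
  borough = rep(c("Camden", "Hackney", "Newham"), each = 3),
  year    = rep(2018:2020, times = 3),
  value   = c(10, 11, 12, 9, 9, 10, 8, 10, 13)
)

# Draw all series in grey, then overplot the headline series in colour,
# so the audience's eye goes straight to the finding in the title.
ggplot(df, aes(year, value, group = borough)) +
  geom_line(colour = "grey70") +
  geom_line(data = subset(df, borough == "Newham"),
            colour = "firebrick", linewidth = 1.2) +
  labs(title = "Newham grew fastest over the period",
       x = NULL, y = "Index")
```

Putting the conclusion in the chart title, rather than a neutral description of the axes, is what turns a plot into one of those “small number of key points”.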

When I realised that producing briefings in the form of slide decks would be an important part of my role at the GLA, I learned how to use an R package called officer, which enables you to create PowerPoint presentations directly from R code without having to use PowerPoint itself. This has made my workflow for building slide decks from raw data much more efficient, and it has benefited the wider team: it is now easier to copy information between different presentations and to keep the ones we’ve already done up to date when new data comes along.
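As a rough illustration of that workflow, here is a minimal officer sketch. The slide content, file names, and the use of the default template are my own placeholder assumptions; a team would normally pass its own branded template to read_pptx(), and the layout and master names come from whichever template is used.

```r
# Sketch of building a slide deck straight from R with {officer}
# (hypothetical content; layout/master names depend on the template).
library(officer)
library(ggplot2)

chart <- ggplot(mtcars, aes(wt, mpg)) + geom_point()

deck <- read_pptx()  # or read_pptx("house_template.pptx")
deck <- add_slide(deck, layout = "Title and Content",
                  master = "Office Theme")
deck <- ph_with(deck, value = "Key finding in one line",
                location = ph_location_type(type = "title"))
deck <- ph_with(deck, value = chart,
                location = ph_location_type(type = "body"))

# Re-running the script regenerates the deck when new data arrives.
print(deck, target = "briefing.pptx")
```

Because the deck is rebuilt from code, updating it for a new data release is a re-run rather than a manual copy-and-paste exercise, which is where the efficiency gain comes from.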

Final thoughts

What I’ve learned about impact from these two roles is that a piece of analysis generates impact by being both interesting and useful to the audience which it is attempting to influence.

The same project can generate impact among a range of different audiences if you can succeed in finding a way of producing outputs in the right format, and I’ve found that using modern open-source tools has made it much easier for me to communicate my work to different audiences at the same time.

David Kingman is one of the UK Data Service 2019-21 Data Impact Fellows.

He currently divides his time between working three days a week as the Senior Researcher at the Intergenerational Foundation, a think tank which researches inequalities between different generations in the UK, and two days a week as the Senior Research and Statistical Analyst within the Greater London Authority City Intelligence Unit’s Demography team.


