Data-driven decision-making and personalised nudging

Stuart Mills continues his series on data policy theory, focusing on the opportunities and problems of nudging.

So far, I have explored how data have a philosophical dimension, and how the way we define data can have significant implications for how we understand who should control, access and ultimately benefit from the digital economy.

I have also considered the political dimensions of data, and explored ideas of data being individualistic, versus data being collective entities. The grand conclusion of both the political and philosophical theories of data was a toolkit for thinking about data ownership, and in my last blogpost, I used that toolkit to construct various models of data ownership within the digital economy.

In the next two blog posts, I’m going to consider some more practical questions of data policy. Next time, I’m going to explore how money and economic production systems themselves are becoming digitised via technologies such as fintech and blockchain. I will also consider whether these innovations represent new phenomena for the economy, or if they’re actually part of a wider story regarding economic development.

In this blogpost, however, I’m going to be somewhat more indulgent, and write specifically about the ideas covered in my PhD thesis. These ideas pertain to the intersection of behavioural economics and data, and how personalised choice environments can be both extremely helpful and potentially damaging.

Beyond the Great Hack

The story of Cambridge Analytica, Facebook and online microtargeting during various elections in 2016 has attracted a lot of attention – understandably – but the actual mechanisms of influence remain rather obscure.

The Netflix documentary The Great Hack, for instance, describes the techniques used by Cambridge Analytica as “weapons of mass destruction”. I’m not going to comment on whether this characterisation (in terms of destructive power) is fair, but I do think it’s prudent for us to have a fair grasp of the behavioural mechanisms at play before proceeding with such evocative language.

A good place to start may be Wendy Liu and her recent book, Abolish Silicon Valley.

In it, Liu offers an account of her time as a programmer trying to find success with a start-up in the Silicon Valley tech-space. Liu’s start-up used social media profiles to micro-target advertising at users, and she even notes the close parallel between what she was doing and what Cambridge Analytica were doing at the same time. But Liu, somewhat cynically, also notes a limitation to the model of Cambridge Analytica and their rivals: while these companies were great at identifying who to target, far less attention was paid to what to target them with.

In many ways, this is a restatement of the classic marketing problem: it’s not just about having the right audience and grabbing their attention; there needs to be some call to action!

My work on personalised nudging tries to use data to create behavioural interventions which might be expected to produce some change in people’s behaviour.

What is Personalised Nudging?

I have had an odd obsession with the concept of personalised nudging. Nudging – which I will explain momentarily – is something I have been very interested in, but also rather critical of. For me, personalisation is a natural evolution of nudging, and resolves several problems that pervade nudge theory.

The opportunity to resolve these problems has really driven my interest.

But what is nudging?

The term ‘nudge’ is by now relatively old, having been coined in the 2008 book Nudge, co-authored by Nobel laureate Richard Thaler and law scholar Cass Sunstein. The eponymous nudge has something of a troubled definition (certainly a challenged one), but I personally define it as

“a small change in how a prospect is framed which can have a significant and predictable impact on the decision which is made.”

By small, I generally mean it shouldn’t force people to choose an option, and it shouldn’t overly incentivise the selection of an option.

For instance, automatically enrolling people as organ donors can significantly increase the number of organ donors in a country, even though people can choose not to be organ donors (i.e. they can opt out). More recently, nudges have been used to help tackle the spread of Covid-19; the 2-metre separation rule, for instance, is a typical rule-of-thumb nudge.

Nudges aren’t perfect, for lots of reasons. But one of the biggest reasons, in my opinion, is that people are different.

Say, for instance, I want to encourage people to save more. I might use a social norm nudge – which informs people of the average savings rate – to get those who save too little to save more. And indeed, this nudge may work. Unfortunately, I often can’t distinguish between those who should save more and those who are already saving enough, which can create problems if I nudge the wrong person.

In a 2018 study, for instance, Thunström, Gilbert and Jones-Ritten found that nudging people who already had sufficient savings caused them to save too much and, as a result of not spending their money, made them less happy.

This is where the data come in.

As an example, lots of data exist which could be used to identify how much money people have. Using these data, we might choose to nudge those who have no savings but do have the capacity to save, while nudging those who fall outside this group in a different way. In principle, we are personalising the nudge that we use based on individual characteristics, as the sketch below illustrates.
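
To make this concrete, here is a minimal, hypothetical sketch of such a rule-based personalised nudge in Python. The Person attributes, the savings threshold and the nudge descriptions are all illustrative assumptions of mine, not drawn from my paper or from any deployed system.

```python
from dataclasses import dataclass

@dataclass
class Person:
    """Illustrative profile; real personalisation would draw on far richer data."""
    savings: float            # current savings balance
    disposable_income: float  # monthly income left after essential spending

def choose_nudge(person: Person, savings_target: float = 1000.0) -> str:
    """Pick a nudge based on individual characteristics (a toy decision rule)."""
    if person.savings >= savings_target:
        # Already-sufficient savers: a savings nudge risks the over-saving
        # harm found by Thunström, Gilbert and Jones-Ritten (2018).
        return "no savings nudge – perhaps encourage sensible spending instead"
    if person.disposable_income > 0:
        # Under-savers with the capacity to save: encourage more saving.
        return "social norm nudge – 'most people like you save more each month'"
    # No capacity to save: a savings nudge would be ineffective or unfair.
    return "no nudge – signpost to financial support instead"

if __name__ == "__main__":
    print(choose_nudge(Person(savings=200.0, disposable_income=300.0)))
    print(choose_nudge(Person(savings=5000.0, disposable_income=300.0)))
```

In practice, the segmentation would draw on far richer data, and likely a predictive model rather than a hand-written rule, but the principle – matching the nudge to the individual – is the same.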

Personalisation is more complicated than this, but I don’t want to hammer this point too much. For those who are interested, my recent paper on the topic offers what I think is a good overview of the latest developments in the field (though I would say that…).

What I want to emphasise here is the role of data in all of this. Sometimes, data – and very personal data – can be used for initiatives which improve social welfare; for instance, we would generally regard more saving as a positive thing. What’s more, deliberate ignorance of individual differences can create significant harms. As Thunström, Gilbert and Jones-Ritten find, people can be nudged to save more, but if those doing the saving suffer as a result, a seemingly good policy is actually a bad one.

The flip side to all of this is manipulation and exploitation. Personalised nudging solves the question of what to target people with, once targets have been pre-selected. In conjunction with micro-targeting technologies and bad actors, such nudges could quickly become harmful to society.

I am reminded, for instance, of Evgeny Morozov’s argument in The Net Delusion; for every positive technology development, there is often an opportunity for that same development to be used negatively. Let’s consider some of these challenges.

Nudging Next to a Cliff

Cambridge Analytica is the prototypical example of how personal preference data can be transformed into cognitive profiles used to target individuals. In many ways, however, what Cambridge Analytica did isn’t all that interesting.

As Liu notes, one of the big issues her start-up faced was a saturated market – lots of people were working on this stuff. I differentiate micro-targeting from personalised nudging (though crossover exists; a 2017 paper on nudge theory and Facebook likes was published by academics distantly associated with Cambridge Analytica), but Cambridge Analytica remains quite the case study from a data policy perspective.

My central question is this: why did people get angry?

On the face of it, this may seem rather flippant.

But Cambridge Analytica was not kicked off Facebook for their advertising activity; it was for using data which they shouldn’t have had. To an extent, this violation is why Facebook received so much backlash following the revelations in 2018, but this is tangential. From a targeting perspective, Cambridge Analytica utilised the same basic functions as many other advertisers did and continue to do. Targeted advertising is fast becoming an accepted facet of digital life, and as Shoshana Zuboff notes in the case of Google, there are often mutual advantages to seeing targeted, personalised content.

My hypothesis about Cambridge Analytica is a behavioural one.

When I’m targeted with an advert to buy something on Facebook, my decision to purchase that item or not generally affects me only.

When I’m targeted with an advert to vote for someone, my decision impacts many other people.

With my behavioural economist hat on, I would argue that the targeting of a product and the targeting of a political candidate do not represent equivalent decisions. In fact, in nudge theory, we would describe the former decision as a pro-self choice, while the latter is a pro-social choice.

There is no clear-cut framework for delineating when targeting, nudging and personalised nudging should and should not be used.

Certainly, I agree with Cass Sunstein that when data are used, we should be clear why they’re being used. But equally, even if we have the data, and we have a reason to think using it to influence behaviour would be good, I think it’s wise to return to the base decision itself, and ask: who is this decision impacting?

The risk of ignoring this is to nudge too hard, or too far, or in the wrong direction entirely. And, broadly, we should remember that what we do with data is just as central to data policy as the question of data access itself.


About the author

Stuart Mills is a PhD researcher at the Manchester Metropolitan University Future Economies Research Centre. His research includes behavioural economics, behavioural public policy, hypernudging and data politics.

Read Stuart’s other posts in this series.
