Personalisation vs transparency: where next for algorithms?

Arguably the defining trend in improving the customer experience over recent years has been personalisation: joining up channels, recognising customers and anticipating their needs.

Some of this is the role of customer service or sales teams. But the bigger part – particularly online – is automated, relying on systems that compare an individual customer’s interaction with vast stores of data about their behaviour and the previous interactions of millions of others. This can then be used to predict – with varying degrees of accuracy – what they might want, or do, next.

We are all familiar with the results, which can be seen in the proliferation of personalised recommendations for additional purchases, highly targeted advertising and customised content.


Algorithms hit the headlines

These calculations are based on algorithms: carefully constructed sequences of instructions to solve a specific problem or accomplish a goal. Not so long ago, algorithms were the preserve of mathematicians and coders. But in the last year or two, they’ve stumbled into mainstream consciousness, predominantly on the back of some major controversies.

The biggest of these was when it emerged that Cambridge Analytica had used Facebook data to build algorithms – more than 250 of them, according to one of those involved[1] – so its clients could micro-target advertising messages during elections.

That in turn led to growing awareness that Facebook itself makes extensive use of algorithms, for example to determine the order in which posts appear on an individual’s news feed. In simple terms, this algorithm reflects the fact that people are more likely to share and comment on a picture of their daughter’s dog than their neighbour’s holiday, and it enables Facebook to prioritise content accordingly.
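The precise workings are proprietary, but the underlying idea can be sketched in a few lines: score each post by predicted engagement and sort the feed accordingly. The field names and weights below are illustrative assumptions, not Facebook’s actual model.

```python
# Minimal, illustrative sketch of engagement-based feed ranking.
# The real ranking system is proprietary and far more complex.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post, affinity: dict[str, float]) -> float:
    """Score a post by the viewer's affinity with its author and the
    engagement it has already attracted (hypothetical weights)."""
    interaction = post.likes * 1.0 + post.comments * 3.0 + post.shares * 5.0
    return affinity.get(post.author, 0.1) * interaction

def rank_feed(posts: list[Post], affinity: dict[str, float]) -> list[Post]:
    # Highest predicted engagement appears first in the feed.
    return sorted(posts, key=lambda p: engagement_score(p, affinity), reverse=True)

feed = rank_feed(
    [Post("daughter", 40, 12, 3), Post("neighbour", 5, 1, 0)],
    affinity={"daughter": 0.9, "neighbour": 0.2},
)
```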

Suspicious minds

A quick Google search – itself autocompleted using algorithms – now provides all manner of tips for outsmarting “the Facebook algorithm”. That’s something of a misnomer: there’s more than one at work. But it serves to demonstrate a growing public suspicion towards the use of algorithms.

One factor in that suspicion is that algorithms can appear biased: studies have highlighted, for instance, that algorithms have promoted adverts for engineering jobs more frequently to men than to women. Yet while this might sit awkwardly with equal opportunities legislation, it arguably demonstrates the algorithm’s effectiveness: on a purely statistical level, more men currently work in engineering, so the advert is better targeted at men.

Put another way, it’s not the algorithm that is biased; instead, it is accurately reflecting a bias in society to predict where the advert can be most successfully used.

In some cases, the bias may lie with those creating the algorithm. Studies have found that credit card applicants are offered far higher interest rates than average if the system establishes that they are in marriage counselling.[2] While data may support the view that those in marriage counselling are at higher risk of defaulting on payments, the algorithm takes this factor into account only because it has been coded to do so.
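To make that concrete, here is a deliberately crude, hypothetical version of such a pricing rule. The function, the figures and the surcharge are invented for illustration; the point is simply that the marriage counselling factor influences the offer only because someone chose to write it in.

```python
# Hypothetical, deliberately simplified rate calculation. The
# "marriage counselling" factor matters only because a developer
# chose to include it as an input.
def quoted_interest_rate(base_rate: float,
                         credit_score: int,
                         in_marriage_counselling: bool) -> float:
    rate = base_rate
    if credit_score < 600:
        rate += 5.0          # conventional credit-risk adjustment
    if in_marriage_counselling:
        rate += 8.0          # coded-in assumption about default risk
    return rate

print(quoted_interest_rate(12.9, 720, in_marriage_counselling=False))  # 12.9
print(quoted_interest_rate(12.9, 720, in_marriage_counselling=True))   # 20.9
```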

Seeing the questions

Financial services providers have, of course, been using algorithms for years to process loan and insurance applications. They’re why we’re asked that succession of increasingly targeted questions, and why we find ourselves guessing the best way to describe our professions.

There is a degree of transparency about these algorithms: you know what you’re being asked, even if you don’t know what ‘weight’ the provider places on your answers. Further, thanks to GDPR, there is now a right to challenge decisions made on the basis of “automated individual decision-making and profiling”.[3]

But some decisions prove hard to challenge – as shown by stories from the US relating to the use of algorithms in sentencing criminals. In one case, a man was convicted of two relatively minor offences that would not normally result in imprisonment. However, a range of data related to the offender was fed into a risk assessment algorithm. It predicted – based on this profiling data – that the man was likely to re-offend, and the judge sentenced him to six years in jail. Unsurprisingly, the man appealed, but the US Supreme Court declined to hear the case.[4]

The algorithm itself remains a secret – as do the factors that it considers in its risk assessment.


Time for transparency?

This lack of transparency is something that regulators are beginning to consider. The Consumer Protection and Commerce subcommittee of the US House of Representatives has held hearings on the use of algorithms[5] and Angela Merkel has publicly called for transparency about algorithm use from the likes of Google and Facebook.[6]

The question is what transparency means. A published algorithm, running to thousands or millions of lines of code, would be meaningless to the majority of us. Furthermore, while they are originally created by coders, and may reflect the coder’s (or their employer’s) bias, many algorithms are built with some kind of machine learning capability. That means the algorithm published this morning may well have changed by the afternoon. A report by the UK Government Office for Science further highlighted that “simply sharing static code provides no assurance it was actually used in a particular decision.”[7]
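A toy sketch makes the point. Assume a simple online-learning model whose weights are nudged by every new interaction: the “published” weights from the morning no longer describe the decisions being made by the afternoon. The update rule and numbers here are illustrative only.

```python
# Toy online-learning sketch: the decision rule shifts with every new
# observation, so a static copy of this morning's weights says little
# about this afternoon's decisions.
def update(weights: list[float], features: list[float],
           outcome: float, lr: float = 0.1) -> list[float]:
    prediction = sum(w * x for w, x in zip(weights, features))
    error = outcome - prediction
    return [w + lr * error * x for w, x in zip(weights, features)]

weights = [0.5, -0.2]            # the version "published" this morning
for features, outcome in [([1.0, 0.3], 1.0), ([0.2, 0.9], 0.0)]:
    weights = update(weights, features, outcome)
print(weights)                   # already different by the afternoon
```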

Awareness builds trust

Rather than giving consumers the algorithm itself, a more practical solution may revolve around alerting customers to the fact that an algorithm is being used. That could be as simple as a message like those used for cookie control (though not necessarily with the ability to opt out) and would enable customers to enjoy the benefits of algorithms – the Netflix or Amazon recommendations, for example – but also to be aware that their online behaviour is being tracked.

Regulation to this effect may well be coming down the line, but to me, there’s an opportunity for organisations to use algorithm transparency for competitive advantage. It’s well-established that transparency, whether that’s about fees, delivery times or product and service details, is a major influence on how much customers trust a business: why not extend it to data science too?

Imagine the contrast between booking flights with a provider that tells you it’s using algorithms to return prices – and that those prices may be affected by you deferring the decision – and one that simply presents a different price each time. Which one would you trust more, and prefer to use? You don’t need to know the full details to understand the principle, and it’s here that transparency and personalisation can come together.

Algorithms driving personalisation

This is an area Ember is beginning to work in. We are currently testing an algorithm which helps one of the UK’s largest high street retailers to improve the effectiveness of its payment collections.

The algorithm is used when customers fall into arrears on payments; using a vast quantity of historic data, we have sought to identify which customers would benefit from talking to field agents at an early stage, to prevent arrears escalating. This enables the provider to dispatch field agents to the customers most likely to respond positively – either making a payment or agreeing a practical solution at the first visit – which is better for both the provider and the customer.
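The model itself is commercially sensitive, but the shape of the approach can be sketched with a hypothetical propensity score: rank the accounts in arrears by how likely an early visit is to succeed, then send agents to the top of the list. The features and weights below are illustrative stand-ins, not the production model.

```python
# Illustrative only: a hypothetical propensity score for "responds well
# to an early field visit", used to rank accounts in arrears. The real
# model is trained on historic outcome data and is more sophisticated.
from dataclasses import dataclass

@dataclass
class Account:
    customer_id: str
    days_in_arrears: int
    previous_visits_successful: int
    balance_outstanding: float

def visit_propensity(acct: Account) -> float:
    # Hypothetical weights standing in for a trained model.
    score = 0.5
    score += 0.10 * acct.previous_visits_successful
    score -= 0.002 * acct.days_in_arrears
    score -= 0.0001 * acct.balance_outstanding
    return score

def dispatch_list(accounts: list[Account], capacity: int) -> list[Account]:
    """Return the accounts most likely to respond positively to a visit,
    up to the number of visits the field team can make."""
    return sorted(accounts, key=visit_propensity, reverse=True)[:capacity]
```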

It’s early days, but the results so far suggest the algorithm is leading to higher efficiency and more timely revenue collection.

This is an example of how algorithms can be used effectively to personalise the service provided – and there are undoubtedly many others. But for me, for those opportunities to be taken, it’s vital we are open about where algorithms and predictive modelling are being used; not only will that help build trust, it will also help us make our algorithms more accurate and effective.