Human curation in the age of AI

Originally published on Simply Communicate.

Netflix has just announced they are using real people to help their users find new movies to watch. They are trialling a new feature called Collections which, in their words, adds a ‘human touch’ to film recommendations on their iOS mobile platform.

It’s a bold move considering that many other publishers prefer to only rely on automated or AI algorithm-based recommendations. But Netflix isn’t about to do away with machines altogether. After all, much of their success has been due to algorithm-based suggestions.

Instead, they are experimenting with how best to use humans and machines, algorithms and experts, to engage and keep their users.

The hard economics of content publishing — with dwindling advertising-based revenues, and readers prone to look elsewhere — is driving publishers to find new ways to create and distribute content.

Many have until now left the door firmly open for algorithms and automation (often at the expense of human journalists and editors) to perform a wide variety of tasks, even writing content.

But does placing so much reliance on automation also expose its limitations? For example, can an algorithm ever understand readers’ needs? Or, as Netflix’s trial asks, will readers ever trust judgements made by machines?

Some argue algorithms are less prone to mistakes than their human counterparts. The geeks behind one smart automation tool called Tagmatic (which automatically tags content for editors) understandably laugh at our fallibility when it comes to complex matters such as categorising content. Editors, they say (pointing out the obvious), ‘…don’t even agree with themselves.’

Nevertheless, algorithms and automation tools are hardly immune from criticism. Several recent cases have shown algorithms exhibiting gender and racial bias of their own. Elsewhere, there are growing concerns that few understand how any of the technology works, or even how to reason about its output.

Algorithms, then, just like humans, are far from bullet-proof. But rather than this being primarily a battle of humans versus machines, new research suggests other factors are at play.

One is whether or not we trust whoever is publishing and distributing content to us. A paper published earlier this year, ‘My Friends, Editors, Algorithms, and I’, authored by a group of communications researchers, does much to shine new light on how users consume news-related content online.

Their analysis of over 54,000 users’ reading habits shows that the perception of a news outlet’s independence will significantly influence whether we trust suggestions made by its ‘experts’: editors, journalists and so on. When we don’t, they suggest, we defer to recommendations made by algorithms.

‘As trust in news organisations, and in the political independence of the news media, falls, people are less likely to agree that selection by editors and journalists is a good way to get news.’

They add that context also matters. What we want from content will shape whether or not we heed expert advice. As they put it, users ‘…do prefer human expertise where it is specific and not generalised.’ That is, where readers are looking for content that conveys personal knowledge, taste or experience, human experts will trump machine-based suggestions.

On the other hand, ‘…if the recommendation is derived from known rules (as with medical diagnosis or legal counsel) people might rather follow the advice of an algorithm’. But even here, as already shown, algorithms are not immune to questions of bias.

Nevertheless, what their research confirms is that we will tend to follow expert advice if it reflects our own political or cultural worldview. What some regard as authoritative, others will dismiss as inaccurate, ‘fake’ or biased.

Most interesting of all, the researchers suggest we prefer content recommendations based on our own past consumption rather than on what our peers have read: ‘…respondents considered automated personalisation based on their friends’ past consumption behaviour to be a less good way to get news’ than either manually selected news or recommendations drawn from their own past behaviour.

They argue there are two reasons for this. Users either believe their own experience is a better guide than their peers’, or they fear ‘missing out on important information’.

They explain the latter point in somewhat technical terms: peers’ interests rarely coincide, and the frequency with which they like content (on Facebook’s activity feed, for instance) varies considerably. All of which, they argue, makes building a credible list of interesting suggestions from friends’ behaviour unrealistic.

While this research is not revelatory as such, it does reconfirm, especially in an age of algorithms, why we heed some content suggestions over others. The underlying issue is trust, not technology.

This matters in several ways. One is to ask whether or not using technology to precisely target readers will ever truly keep them engaged. Even if algorithm-based targeting is perceived to be neutral, is it the best way to create a new audience or keep them loyal?

However, the more difficult question (and it remains an age-old one) is how to rebuild trust in whoever publishes or provides us with content. Readers expect a high degree of choice in where they get their news, whether from editors, journalists or subject-matter experts, all delivered via myriad devices, channels and platforms.

Meanwhile, content itself is becoming more personal, often at the expense of what an entire publication stands for. We are more likely to follow and identify with individual authors than to agree with everything a publication produces. Readers tend to share content from all over, rather than sticking to a few familiar sources or to their immediate peers. We remain curious about the bigger picture.

Technology, in this sense, helps make that happen. Search and automated recommendations are essential. But is this enough, especially if everyone else is employing similar tactics?

Rather, loyalty is built by establishing a connection between readers and what individual experts, authors, editors and commentators have to say. The task remains the same as it always has been: to create exclusive content and recommendations that offer readers something new, unexpected even, that they cannot find elsewhere.

Good content, as ever, will always win out.

This is why Netflix’s experiment is an interesting one to follow. They realise, as every publisher knows each day they start work, that they need to keep finding new ways to hold their users’ interest by helping them find great movies to watch. Providing such a unique perspective will remain an exclusively human task, because no machine will ever come close to being so knowledgeable.