“Look away now if you don’t want to know the score”, they say on the news before reporting the football results. But imagine if your television knew which teams you follow and which results to hold back – or knew to skip football altogether and tell you about something else. With media personalisation, which we’re working on with the BBC, that kind of thing is becoming possible.

Significant challenges remain for adapting live production, but other aspects of media personalisation are closer at hand. Indeed, media personalisation already exists to an extent. It’s a bit like BBC iPlayer or Netflix suggesting content to you based on what you’ve watched previously, or Spotify curating playlists you might like.

But what we’re talking about is personalisation within the programme itself. This could include adjusting the programme’s duration (you might be offered an abridged or extended version), adding subtitles or graphics, or enhancing the dialogue (to make it more intelligible if, say, you’re in a noisy place or your hearing is starting to go). Or it might include providing extra information related to the programme (a bit like what you can access now with the BBC’s red button).

The big difference is that these features wouldn’t be generic. They would see shows re-packaged according to your own tastes and tailored to your needs, depending on where you are, what devices you have connected and what you’re doing.
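As a rough illustration of the kinds of signals involved, here is a minimal sketch of a hypothetical viewer profile and viewing context being used to choose programme options. Every name in it (ViewerProfile, choose_variant and so on) is invented for illustration and does not correspond to any real BBC or AI4ME system.

# Illustrative sketch only: a hypothetical personalisation profile and context.
# None of these names correspond to a real BBC or AI4ME API.
from dataclasses import dataclass, field

@dataclass
class ViewerProfile:
    followed_teams: list = field(default_factory=list)  # teams whose scores to hold back
    needs_dialogue_enhancement: bool = False
    prefers_subtitles: bool = False

@dataclass
class ViewingContext:
    noisy_environment: bool = False      # e.g. watching on a phone on a train
    available_minutes: int = 60          # how long the viewer has to watch

def choose_variant(profile: ViewerProfile, context: ViewingContext) -> dict:
    """Pick programme options from the profile and the current context."""
    return {
        "duration": "abridged" if context.available_minutes < 30 else "full",
        "subtitles": profile.prefers_subtitles or context.noisy_environment,
        "dialogue_enhancement": profile.needs_dialogue_enhancement or context.noisy_environment,
        "hold_back_scores_for": profile.followed_teams,
    }

print(choose_variant(ViewerProfile(followed_teams=["Brighton"]),
                     ViewingContext(noisy_environment=True, available_minutes=20)))

In practice the rules would not be hand-written like this; they would be learned from data, which is where AI comes in.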

To deliver new kinds of media personalisation to audiences at scale, these features will be powered by artificial intelligence (AI). AI works via machine learning, which performs tasks based on information from vast datasets fed in to train the system (an algorithm).
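The snippet below is a minimal, generic example of that loop: train a simple classifier on a dataset, then check how it behaves on examples it has not seen. It uses scikit-learn with synthetic data; nothing about it is specific to media personalisation or to the BBC.

# A minimal, generic machine-learning example: train a model on a dataset,
# then test it on examples it has not seen. Synthetic data, illustration only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# A synthetic stand-in for a "vast dataset": 5,000 examples, 20 features each.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)   # the "algorithm" being trained
model.fit(X_train, y_train)                 # learn patterns from the training data

print("accuracy on unseen data:", accuracy_score(y_test, model.predict(X_test)))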

This is the focus of a partnership between the BBC and the University of Surrey’s Centre for Vision, Speech and Signal Processing. Known as Artificial Intelligence for Personalised Media Experiences, or AI4ME, this partnership is seeking to help the BBC better serve the public, especially new audiences.

Acknowledging AI’s difficulties

The AI principles of the Organisation for Economic Co-operation and Development (OECD) require AI to benefit humankind and the planet, incorporating fairness, safety, transparency and accountability.

Yet AI systems are increasingly accused of automating inequality as a consequence of biases in their training, which can reinforce existing prejudices and disadvantage vulnerable groups. This can take the form of gender bias in recruitment, or racial disparities in facial recognition technologies, for instance.

Another potential problem with AI systems is what we refer to as generalisation. The first recognised fatality from a self-driving car is an example of this. Having been trained on road footage, which likely captured many cyclists and pedestrians separately, the car failed to recognise a woman pushing her bike across a road.

We therefore need to keep retraining AI systems as we learn more about their real-world behaviour and our desired outcomes. It’s impossible to give a machine instructions for all eventualities, and impossible to predict all potential unintended consequences.



We don’t yet fully know what kinds of problems our AI could present in the realm of personalised media. This is what we hope to find out through our project. But, for example, it could be something like dialogue enhancement working better with male voices than with female voices.
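One way to catch that kind of problem early is to evaluate the system separately for each group of users, rather than relying on a single overall average. The sketch below is hypothetical: intelligibility scores for an imaginary dialogue-enhancement system, broken down by voice type, with all numbers and the flagging threshold made up for illustration.

# Hypothetical bias check: compare an enhancement quality score across groups
# instead of relying on a single overall average. All numbers are invented.
from statistics import mean

# Imaginary intelligibility scores (0 to 1) for enhanced clips, by speaker group.
scores = {
    "male_voices":   [0.91, 0.88, 0.93, 0.90],
    "female_voices": [0.78, 0.81, 0.75, 0.80],
}

overall = mean(s for group in scores.values() for s in group)
print(f"overall score: {overall:.2f}")  # an overall average can hide the gap

per_group = {group: mean(vals) for group, vals in scores.items()}
for group, avg in per_group.items():
    print(f"{group}: {avg:.2f}")

gap = max(per_group.values()) - min(per_group.values())
if gap > 0.05:  # illustrative threshold for flagging a disparity
    print(f"warning: groups differ by {gap:.2f}, investigate before deployment")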

Ethical concerns don’t always cut through to become a priority in a technology-focused business, unless government regulation or a media storm demands it. But isn’t it better to anticipate and fix these problems before getting to that point?

The earlier we can confront AI engineers with any challenges, the sooner they can get to work.
Rawpixel.com/Shutterstock

The citizen council

Designing our personalisation system well calls for public engagement from the outset. This is vital for bringing a broad perspective into technical teams that may suffer from narrowly defined performance metrics, “group think” within their departments, and a lack of diversity.

Surrey and the BBC are working together to test a way to bring in people – ordinary people, rather than experts – to oversee AI’s development in media personalisation. We’re trialling “citizen councils” to create a dialogue, where the insight we gain from the councils will inform the development of the technologies. Our citizen council will have diverse representation and independence from the BBC.

First, we frame the theme for a workshop around a particular technology we’re investigating or a design issue, such as using AI to cut out a presenter in a video, for substitution into another video. The workshops draw out opinions and facilitate discussion with experts around the theme, such as one of the engineers. The council then consults, deliberates and produces its recommendations.

The themes give the citizen council a way to review specific technologies against each of the OECD AI principles and to debate the acceptable uses of personal data in media personalisation, independent of corporate or political interests.

There are risks. We might fail to adequately reflect diversity, there might be misunderstanding around proposed technologies, or an unwillingness to listen to others’ views. What if the council members are unable to reach a consensus, or begin to develop a bias?



We cannot measure what disasters are avoided by going through this process, but new insights that influence the engineering design, or new issues that allow remedies to be considered earlier, will be signs of success.

And one round of councils is not the end of the story. We aim to apply this process throughout this five-year engineering research project. We will share what we learn and encourage other projects to adopt this approach to see how it translates.

We believe this approach can bring broad ethical considerations into the purview of engineering developers during the earliest stages of the design of complex AI systems. Our participants are not beholden to the interests of big tech or governments, yet they bring the values and beliefs of society.

This article was originally published at theconversation.com