Machine-learning systems are increasingly worming their way into our everyday lives, challenging our moral and social values and the rules that govern them. These days, virtual assistants threaten the privacy of the home; news recommenders shape the way we understand the world; risk-prediction systems tip off social workers on which children to protect from abuse; while data-driven hiring tools also rank your chances of landing a job. However, the ethics of machine learning remains blurry for many.

Searching for articles on the topic for the young engineers attending the Ethics and Information and Communications Technology course at UCLouvain, Belgium, I was particularly struck by the case of Joshua Barbeau, a 33-year-old man who used a website called Project December to create a conversational robot – a chatbot – that would simulate conversation with his deceased fiancée, Jessica.

Conversational robots mimicking dead people

Known as a deadbot, this type of chatbot allowed Barbeau to exchange text messages with an artificial “Jessica”. Despite the ethically controversial nature of the case, I rarely found materials that went beyond the mere factual aspect and analysed the case through an explicit normative lens: why would it be right or wrong, ethically desirable or reprehensible, to develop a deadbot?

Before we grapple with these questions, let’s put things into context: Project December was created by the games developer Jason Rohrer to enable people to customise chatbots with the personality they wanted to interact with, provided that they paid for it. The project was built drawing on an API of GPT-3, a text-generating language model by the artificial intelligence research company OpenAI. Barbeau’s case opened a rift between Rohrer and OpenAI because the company’s guidelines explicitly forbid GPT-3 from being used for sexual, amorous, self-harm or bullying purposes.
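To make the mechanics a little more concrete, here is a minimal sketch of how a personality-conditioned chatbot can be built on top of a text-completion API such as GPT-3’s. The persona text, prompt format, model name and parameters are illustrative assumptions of my own, not Project December’s actual implementation.

```python
# Illustrative sketch only: a personality-conditioned chatbot on a text-completion API.
# The persona, prompt layout and parameter choices are assumptions, not Project December's code.
import os
import requests

API_URL = "https://api.openai.com/v1/completions"  # OpenAI's documented completions endpoint
API_KEY = os.environ["OPENAI_API_KEY"]

# A short "seed" describing the personality to imitate, followed by the running transcript.
persona = (
    "The following is a conversation with Jessica. "
    "She is warm, witty and speaks in short, affectionate sentences.\n"
)
transcript = ""

def reply(user_message: str) -> str:
    """Append the user's message to the transcript and ask the model to continue it."""
    global transcript
    transcript += f"User: {user_message}\nJessica:"
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "davinci-002",   # hypothetical choice of completion model
            "prompt": persona + transcript,
            "max_tokens": 80,
            "temperature": 0.8,
            "stop": ["User:"],        # stop before the model writes the user's next turn
        },
        timeout=30,
    )
    text = response.json()["choices"][0]["text"].strip()
    transcript += f" {text}\n"
    return text

if __name__ == "__main__":
    print(reply("Hi, it's me. How are you today?"))
```

The point of the sketch is simply that very little machinery is needed: a few paragraphs of “personality” text prepended to each request are enough to steer a large language model’s replies, which is what makes services of this kind so easy to build and so hard to govern.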

Calling OpenAI’s position hyper-moralistic and arguing that people like Barbeau were “consenting adults”, Rohrer shut down the GPT-3 version of Project December.

While we may all have intuitions about whether it is right or wrong to develop a machine-learning deadbot, spelling out its implications is hardly an easy task. This is why it is important to address the ethical questions raised by the case, step by step.

Is Barbeau’s consent enough to develop Jessica’s deadbot?

Since Jessica was a real (albeit dead) person, Barbeau’s consent to the creation of a deadbot mimicking her seems insufficient. Even when they die, people are not mere things with which others can do as they please. This is why our societies consider it wrong to desecrate or to be disrespectful to the memory of the dead. In other words, we have certain moral obligations towards the dead, insofar as death does not necessarily imply that people cease to exist in a morally relevant way.

Likewise, the debate is open as to whether we should protect the dead’s fundamental rights (e.g., privacy and personal data). Developing a deadbot replicating someone’s personality requires great amounts of personal information such as social network data (see what Microsoft or Eternime propose), which have been shown to reveal highly sensitive traits.

If we agree that it is unethical to use people’s data without their consent while they are alive, why should it be ethical to do so after their death? In that sense, when developing a deadbot, it seems reasonable to request the consent of the person whose personality is mirrored – in this case, Jessica.

Death separates us from our loved ones, but could machine learning bring them back to digital life?
Philippe Lorez/AFP

When the imitated person gives the green light

Thus, the second question is: would Jessica’s consent be enough to consider her deadbot’s creation ethical? What if it were degrading to her memory?

The limits of consent are, indeed, a controversial issue. Take as a paradigmatic example the “Rotenburg Cannibal”, who was sentenced to life imprisonment despite the fact that his victim had agreed to be eaten. In this regard, it has been argued that it is unethical to consent to things that can be detrimental to ourselves, be it physically (to sell one’s own vital organs) or abstractly (to alienate one’s own rights).

In what specific terms something might be detrimental to the dead is a particularly complex issue that I will not analyse in full. It is worth noting, however, that even if the dead cannot be harmed or offended in the same way as the living, this does not mean that they are invulnerable to bad actions, nor that these are ethical. The dead can suffer damage to their honour, reputation or dignity (for instance, posthumous smear campaigns), and disrespect toward the dead also harms those close to them. Moreover, behaving badly toward the dead leads us to a society that is more unjust and less respectful of people’s dignity overall.

Finally, given the malleability and unpredictability of machine-learning systems, there is a risk that the consent provided by the person mimicked (while alive) amounts to little more than a blank cheque on the system’s potential paths.

Taking all of this into account, it seems reasonable to conclude that if the deadbot’s development or use fails to correspond to what the imitated person agreed to, their consent should be considered invalid. Moreover, if it clearly and intentionally harms their dignity, even their consent should not be enough to consider it ethical.

Who takes responsibility?

A third issue is whether artificial intelligence systems should aspire to mimic any kind of human behaviour (irrespective here of whether this is possible).

This has been a long-standing concern in the field of AI, and it is closely linked to the dispute between Rohrer and OpenAI. Should we develop artificial systems capable of, for example, caring for others or making political decisions? It seems that there is something in these skills that makes humans different from other animals and from machines. Hence, it is important to note that instrumentalising AI toward techno-solutionist ends such as replacing loved ones may lead to a devaluation of what characterises us as human beings.

The fourth ethical question is who bears responsibility for the outcomes of a deadbot – especially in the case of harmful effects.

Imagine that Jessica’s deadbot autonomously learned to behave in a way that demeaned her memory or irreversibly damaged Barbeau’s mental health. Who would take responsibility? AI experts answer this slippery question through two main approaches: first, responsibility falls upon those involved in the design and development of the system, as long as they do so according to their particular interests and worldviews; second, machine-learning systems are context-dependent, so the moral responsibility for their outputs should be distributed among all the agents interacting with them.

I place myself closer to the first position. In this case, as there is an explicit co-creation of the deadbot involving OpenAI, Jason Rohrer and Joshua Barbeau, I consider it logical to analyse the level of responsibility of each party.

First, it would be hard to hold OpenAI responsible after they explicitly forbade using their system for sexual, amorous, self-harm or bullying purposes.

It seems reasonable to attribute a significant level of moral responsibility to Rohrer because he: (a) explicitly designed the system that made it possible to create the deadbot; (b) did so without anticipating measures to avoid potential adverse outcomes; (c) was aware that it failed to comply with OpenAI’s guidelines; and (d) profited from it.

And because Barbeau customised the deadbot drawing on particular features of Jessica, it seems legitimate to hold him co-responsible in the event that it degraded her memory.

Ethical, under certain conditions

So, coming back to our first, general question of whether it is ethical to develop a machine-learning deadbot, we could give an affirmative answer on the condition that:

  • both the person mimicked and the one customising and interacting with it have given their free consent to as detailed a description as possible of the design, development and uses of the system;

  • developments and uses that do not follow what the imitated person consented to, or that go against their dignity, are forbidden;

  • the people involved in its development and those who profit from it take responsibility for its potential negative outcomes – both retroactively, to account for events that have happened, and prospectively, to actively prevent them from happening in the future.

This case exemplifies why the ethics of machine learning matters. It also illustrates why it is essential to open a public debate that can better inform citizens and help us develop policy measures to make AI systems more open, socially fair and compliant with fundamental rights.

This article was originally published at theconversation.com