From self-driving cars to digital assistants, artificial intelligence (AI) is fast becoming an integral technology in our lives. But the same technology that can make our day-to-day lives easier is also being incorporated into weapons for use in combat situations.

Weaponised AI features heavily in the military strategies of the US, China and Russia. And some existing weapons systems already include autonomous capabilities based on AI. Developing weaponised AI further means machines could potentially make decisions to harm and kill people based on their programming, without human intervention.

Countries that back the use of AI weapons claim it allows them to respond to emerging threats at greater than human speed. They also say it reduces the danger to military personnel and increases the ability to hit targets with greater precision. But outsourcing use-of-force decisions to machines violates human dignity. And it is also incompatible with international law, which requires human judgement in context.

Indeed, the role that humans should play in use-of-force decisions has been an increasing area of focus in many United Nations (UN) meetings. And at a recent UN meeting, states agreed that it is unacceptable on ethical and legal grounds to delegate use-of-force decisions to machines – "without any human control whatsoever".

But while this might sound like good news, there are still major differences in how states define "human control".

The problem

A closer look at different governmental statements shows that many states, including key developers of weaponised AI such as the US and UK, favour what is known as a distributed perspective of human control.

This is where human control is present across the entire life-cycle of the weapons – from development to use, and at various stages of military decision-making. But while this might sound sensible, it actually leaves a lot of room for human control to become more nebulous.

Algorithms are starting to change the face of warfare.
Mykola Holyutyak/Shutterstock

Taken at face value, recognising human control as a process rather than a single decision is correct and important. And it reflects operational reality, in that there are multiple stages to how modern militaries plan attacks involving a human chain of command. But there are drawbacks to relying on this understanding.

It can, for instance, uphold the illusion of human control when in reality it has been relegated to situations where it does not matter as much. This risks making the overall quality of human control in warfare dubious, in that it is exerted everywhere in general and nowhere in particular.

This could allow states to focus more on the early stages of research and development and less on specific decisions about using force on the battlefield, such as distinguishing between civilians and combatants or assessing a proportionate military response – decisions that are crucial for complying with international law.

And while it may sound reassuring to have human control from the research and development stage onwards, this also glosses over significant technological difficulties. Namely, current algorithms are not predictable or comprehensible to human operators. So even when human operators supervise systems applying such algorithms in the use of force, they are not able to understand how these systems have calculated targets.

Life and death with data

Unlike machines, human decisions to use force cannot be pre-programmed. Indeed, the brunt of international humanitarian law obligations applies to actual, specific battlefield decisions to use force, rather than to earlier stages of a weapons system's life-cycle. This was highlighted by a member of the Brazilian delegation at the recent UN meetings.

Adhering to international humanitarian law in the fast-changing context of warfare also requires constant human assessment. This cannot simply be done with an algorithm. It is especially the case in urban warfare, where civilians and combatants are in the same space.

Ultimately, to have machines that are able to make the decision to end people's lives violates human dignity by reducing people to objects. As Peter Asaro, a philosopher of science and technology, argues: "Distinguishing a 'target' in a field of data is not recognising a human person as someone with rights." Indeed, a machine cannot be programmed to appreciate the value of human life.

Russia's 'Platform-M' combat robot, which can be used both for patrolling and for attacks.
Shutterstock/Goga Shutter

Many states have argued for new legal rules to ensure human control over autonomous weapons systems. But a number of others, including the US, hold that existing international law is sufficient. Yet the uncertainty surrounding what meaningful human control actually is shows that more clarity, in the form of new international law, is needed.

This law must focus on the essential qualities that make human control meaningful, while retaining human judgement in the context of specific use-of-force decisions. Without it, there is a risk of undercutting the value of new international law aimed at curbing weaponised AI.

This is important because without specific regulations, current practices in military decision-making will continue to shape what is considered "appropriate" – without being critically discussed.
