Movies such as 2001: A Space Odyssey, Blade Runner and Terminator brought rogue robots and computer systems to our cinema screens. But these days, such classic science fiction spectacles don’t seem so far removed from reality.

Increasingly, we live, work and play with computational technologies that are autonomous and intelligent. These systems include software and hardware with the capability for independent reasoning and decision making. They work for us on the factory floor; they decide whether we get a mortgage; they track and measure our activity and fitness levels; they clean our living room floors and cut our lawns.

Autonomous and intelligent systems have the potential to affect almost every aspect of our social, economic, political and personal lives, including mundane everyday matters. Much of this seems innocent, but there is reason for concern. Computational technologies impact on every human right, from the right to life to the right to privacy, from freedom of expression to social and economic rights. So how can we defend human rights in a technological landscape increasingly shaped by robotics and artificial intelligence (AI)?

AI and human rights

First, there is a real fear that increased machine autonomy will undermine the status of humans. This fear is compounded by a lack of clarity over who will be held to account, whether in a legal or a moral sense, when intelligent machines do harm. But I am not sure that the focus of our concern for human rights should really lie with rogue robots, as it seems to at present. Rather, we should worry about the human use of robots and artificial intelligence and their deployment in unjust and unequal political, military, economic and social contexts.

This worry is particularly pertinent with respect to lethal autonomous weapons systems (LAWS), often described as killer robots. As we move towards an AI arms race, human rights scholars and campaigners such as Christof Heyns, the former UN special rapporteur on extrajudicial, summary or arbitrary executions, fear that the use of LAWS will put autonomous robotic systems in charge of life and death decisions, with limited or no human control.

AI is also revolutionising the link between warfare and surveillance practices. Groups such as the International Committee for Robot Arms Control (ICRAC) recently expressed their opposition to Google’s participation in Project Maven, a military program that uses machine learning to analyse drone surveillance footage, which could be used for extrajudicial killings. ICRAC appealed to Google to ensure that the data it collects on its users is never used for military purposes, joining protests by Google employees over the company’s involvement in the project. Google recently announced that it will not be renewing its contract.

In 2013, the extent of surveillance practices was highlighted by the Edward Snowden revelations. These taught us much about the threat to the right to privacy and the sharing of data between intelligence services, government agencies and private corporations. The recent controversy surrounding Cambridge Analytica’s harvesting of personal data via social media platforms such as Facebook continues to cause serious apprehension, this time over manipulation and interference in democratic elections that damage the right to freedom of expression.

Meanwhile, critical data analysts challenge discriminatory practices associated with what they call AI’s “white guy problem”. This is the concern that AI systems trained on existing data replicate existing racial and gender stereotypes that perpetuate discriminatory practices in areas such as policing, judicial decisions or employment.

AI can replicate and entrench stereotypes.

Ambiguous bots

The potential threat of computational technologies to human rights and to physical, political and digital security was highlighted in a recently published study on The Malicious Use of Artificial Intelligence. The concerns expressed in this University of Cambridge report must be taken seriously. But how should we deal with these threats? Are human rights ready for the era of robotics and AI?

There are ongoing efforts to update existing human rights principles for this era. These include the UN Framing and Guiding Principles on Business and Human Rights, attempts to write a Magna Carta for the digital age, and the Future of Life Institute’s Asilomar AI Principles, which identify guidelines for ethical research, adherence to values and a commitment to the longer-term beneficent development of AI.

These efforts are commendable but not sufficient. Governments and government agencies, political parties and private corporations, especially the leading tech firms, must commit to the ethical uses of AI. We also need effective and enforceable legislative control.

Whatever new measures we introduce, it is important to acknowledge that our lives are increasingly entangled with autonomous machines and intelligent systems. This entanglement enhances human wellbeing in areas such as medical research and treatment, in our transport system, in social care settings and in efforts to protect the environment.

But in other areas this entanglement throws up worrying prospects. Computational technologies are used to monitor and track our actions and behaviours, trace our steps, our location, our health, our tastes and our friendships. These systems shape human behaviour and nudge us towards practices of self-surveillance that curtail our freedom and undermine the ideas and ideals of human rights.

And herein lies the crux: the capacity for dual use of computational technologies blurs the line between beneficent and malicious practices. What’s more, computational technologies are deeply implicated in the unequal power relationships between individual citizens, the state and its agencies, and private corporations. If unhinged from effective national and international systems of checks and balances, they pose a real and worrying threat to our human rights.
