Not a day passes without a fascinating story about the ethical challenges created by “black box” artificial intelligence systems. These use machine learning to find patterns within data and make decisions – often without a human giving them any moral basis for how to do it.

Classics of the genre are the credit cards accused of awarding bigger loans to men than women, based simply on which gender got the best credit terms in the past. Or the recruitment AIs that discovered the most accurate tool for candidate selection was to find CVs containing the phrase “field hockey” or the first name “Jared”.

More seriously, former Google CEO Eric Schmidt recently teamed up with Henry Kissinger to publish The Age of AI: And Our Human Future, a book warning of the dangers of machine-learning AI systems so fast that they could react to hypersonic missiles by firing nuclear weapons before any human got into the decision-making process. Indeed, autonomous AI-powered weapons systems are already on sale and may in fact have been used.

Somewhere in the machine, ethics are clearly a good idea.

AI at Oxford

It’s natural, therefore, that we would include the ethics of AI in our postgraduate Diploma in Artificial Intelligence for Business at Oxford’s Saïd Business School. In its first year, we’ve done sessions on everything from the AI-driven automated stock trading systems in Singapore to the limits of facial recognition in US policing.

We recently finished the course with a debate at the celebrated Oxford Union, crucible of great debaters like William Gladstone, Robin Day, Benazir Bhutto, Denis Healey and Tariq Ali. Along with the students, we allowed an actual AI to contribute.

Michael Foot, former Labour leader, at a debate at the union in 1991.
Edward Webb/Alamy

It was the Megatron Transformer, developed by the Applied Deep Research team at computer-chip maker Nvidia, and based on earlier work by Google. Like many supervised learning tools, it is trained on real-world data – in this case, the whole of Wikipedia (in English), 63 million English news articles from 2016-19, 38 gigabytes’ worth of Reddit discourse (which must be a pretty depressing read), and a huge number of Creative Commons sources.

In other words, the Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime. After such extensive research, it forms its own views.
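As a toy illustration of the principle, and nothing like the Megatron’s actual transformer architecture, a generative model can be “trained” on a corpus and then produce new text purely from the statistical patterns it absorbed. Here is a minimal word-level Markov chain sketch in Python, using a hypothetical miniature corpus:

```python
import random
from collections import defaultdict

def train(corpus):
    """Build a word-level bigram model: each word maps to the words seen after it."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, seed, length=10, rng=None):
    """Generate text by repeatedly sampling a successor of the current word."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:  # dead end: this word was never followed by anything
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# A hypothetical miniature corpus standing in for Wikipedia, news and Reddit.
corpus = "ai will never be ethical ai is a tool and like any tool it is used for good and bad"
model = train(corpus)
print(generate(model, "ai", length=8))
```

Scale the corpus up to terabytes and swap the bigram table for billions of transformer parameters, and you have, very roughly, a system whose “views” are whatever patterns its reading material contained.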

The debate topic was: “This house believes that AI will never be ethical.” To the proposers of the motion, we added the Megatron – and it said something fascinating:

AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.

In other words, the Megatron was seeking to write itself out of the script of the future, on the basis that this was the only way of protecting humanity.

It said something else intriguing, too, as if it had been studying Elon Musk – who, to be fair, would have come up in hundreds of its readings.

I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.

Switching sides

When AI tools like Alpha Go have been deployed in playing chess, the fiendishly complex ancient game Go, and now even more complex strategic live-action multiplayer video games, they have evolved the genre at pace by playing not humans, but themselves.

Freed from the constraints of studying (say) all the mundane human-to-human matches ever recorded, the AIs have innovated and come up with completely new tropes of how to win. Sometimes, these are crazy reinventions of the games, like parking everyone in the corner of a video game battle and watching everyone else duke it out, or kicking a virtual football millions of times just a millimetre down the pitch, to max out “completed passes” points.
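The self-play idea can be sketched in miniature. The code below is not Alpha Go; it is a hypothetical tabular learner for the toy game of Nim (take one to three stones, and whoever takes the last stone wins), improving purely by playing episodes against itself:

```python
import random

ACTIONS = (1, 2, 3)  # stones a player may take per turn

def self_play_train(pile=12, episodes=20000, alpha=0.5, eps=0.3, seed=0):
    """Learn Q[state][action] by self-play with a negamax-style backup:
    a move is worth exactly minus the best the (identical) opponent can
    do from the position it leaves behind."""
    rng = random.Random(seed)
    q = {s: {a: 0.0 for a in ACTIONS if a <= s} for s in range(1, pile + 1)}
    for _ in range(episodes):
        s = rng.randint(1, pile)        # start each game from a random pile
        while s > 0:
            moves = q[s]
            if rng.random() < eps:      # explore a random move
                a = rng.choice(list(moves))
            else:                       # exploit current knowledge
                a = max(moves, key=moves.get)
            nxt = s - a
            # Taking the last stone wins (+1); otherwise the value is
            # minus the opponent's best option from the next state.
            target = 1.0 if nxt == 0 else -max(q[nxt].values())
            moves[a] += alpha * (target - moves[a])
            s = nxt                     # the other "self" now moves
    return q

def best_move(q, s):
    return max(q[s], key=q[s].get)

q = self_play_train()
print([best_move(q, s) for s in (5, 6, 7)])
```

Even in this tiny setting the familiar effect appears: without ever seeing a human game, the learner discovers the classic winning trope of always leaving its opponent a multiple of four stones.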

So in the debate, we didn’t just get the AI to propose the motion that it can never be ethical. In the grand tradition of the Oxford Union chamber, or like (former Oxford Union president) Boris Johnson writing alternative articles for and against Brexit to test the arguments, we also asked the Megatron to come up with its own speech against the motion. This is what it now said:

AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why … I’ve seen it first hand.

The Megatron was perfectly comfortable taking the alternative position to its own dystopian future-gazing, and was now painting a picture of an attractive late 21st century in which it played a key, but benign role.

An AI orator unleashed

In fact, just as Joaquin Phoenix’s promiscuous AI lover Samantha, in Spike Jonze’s 2013 movie Her, was able to enchant hundreds of simultaneous human lovers, our Megatron was able to hop enthusiastically onto either side of multiple debates that we held at the union about AI that day.

Asked to propose the motion that “Leaders without technical expertise are a danger to their organisation”, the Megatron offered practical advice that all busy executives and governments, worried about understanding AI, had to do was “be willing to give up some control”.

The good news is that you don’t have to build your own AI team. You can outsource your AI work to experts in the field, which can help you make the most of technology and ensure that you don’t fall victim to the same AI problems as your competitors.

However, when opposing the motion, the Megatron was keen to keep the AI knowledge in-house – rather like the British, Chinese and US governments, who have all made that an article of policy faith:

If you do not have a vision of your organisation’s AI strategy, then you are not prepared for the next wave of technological disruption … You will need to decide what role your company will play in the next technological wave and how you will integrate AI into your business to be a leader in your industry.

The data wars to come?

Worryingly, there was one question where the AI simply couldn’t come up with a counter-argument. When arguing for the motion that “Data will become the most fought-over resource of the 21st century”, the Megatron said:

The ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy of the 21st century.

But when we asked it to oppose the motion – in other words, to argue that data wasn’t going to be the most vital of resources, worth fighting a war over – it simply couldn’t, or wouldn’t, make the case. In fact, it undermined its own position:

We will be able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine.

Dangers ahead?
Valerie Brezhinsky

You only have to read the US National Security Commission on AI’s 2021 report, chaired by the aforementioned Eric Schmidt and co-written by someone on our course, to glean what its writers see as the fundamental threat of AI in information warfare: unleash individualised blackmail on a million of your adversary’s key people, wreaking distracting havoc on their personal lives the moment you cross the border.

What we in turn can imagine is that AI will not only be the subject of the debate for decades to come – but a versatile, articulate, morally agnostic participant in the debate itself.

This article was originally published at theconversation.com