Ibrahim Diallo was allegedly fired by a machine. Recent news reports relayed the escalating frustration he felt as his security pass stopped working, his computer login was disabled, and finally he was frogmarched from the building by security staff. His managers were unable to offer an explanation, and were powerless to overrule the system.

Some might see this as a taste of things to come as artificial intelligence is given more power over our lives. Personally, I drew the opposite conclusion. Diallo was sacked because a previous manager had not renewed his contract on the new computer system, and various automated systems then clicked into action. The problems were not caused by AI, but by its absence.

The systems displayed no knowledge-based intelligence: they had no model designed to encapsulate knowledge (such as human resources expertise) in the form of rules, text and logical links. Equally, the systems showed no computational intelligence – the ability to learn from datasets – such as recognising the factors that might lead to dismissal. In fact, it seems that Diallo was fired as a result of an old-fashioned and poorly designed system triggered by a human error. So AI is certainly not to blame – and it may be the solution.

The conclusion I would draw from this experience is that some human resources functions are ripe for automation by AI, especially as, in this case, dumb automation has shown itself to be so inflexible and ineffective. Most large organisations will have a personnel handbook that could be coded up as an automated expert system with explicit rules and models. Many companies have created such systems in a range of domains that involve specialist knowledge, not just in human resources.
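To make this concrete, here is a minimal sketch in Python of what one handbook rule might look like as code. The field names and the rule itself are hypothetical, invented purely for illustration rather than drawn from any real HR system:

```python
from datetime import date

# A minimal sketch of a rule-based HR expert system.
# All field names and rules here are hypothetical, for illustration only.

def evaluate_employment_status(record: dict, today: date) -> str:
    """Apply explicit, human-readable rules to an employee record."""
    # Rule: an expired contract with no renewal on file should raise
    # a case for human review, not trigger automatic dismissal.
    if record["contract_end_date"] < today and not record["renewal_filed"]:
        return "FLAG_FOR_HUMAN_REVIEW"
    return "ACTIVE"

# Example: the kind of record that tripped up Diallo's employer.
record = {
    "contract_end_date": date(2018, 6, 1),
    "renewal_filed": False,
}
print(evaluate_employment_status(record, date(2018, 6, 15)))
# Prints FLAG_FOR_HUMAN_REVIEW rather than revoking access outright.
```

The point of an expert system is that every rule is explicit and inspectable, so a decision as consequential as dismissal can always be traced back to the clause that produced it.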

But a more effective AI system could use a combination of techniques to make it smarter. The way the rules should be applied to the nuances of real situations could be learned from the company’s HR records, in the same way that common law legal systems such as England’s use precedents set by previous cases. The system could revise its reasoning as more evidence became available in any given case, using what is known as “Bayesian updating”. An AI concept called “fuzzy logic” could interpret situations that aren’t black and white, applying evidence and conclusions in varying degrees to avoid the kind of stark decision-making that led to Diallo’s dismissal.
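As a rough illustration of the Bayesian step, the sketch below revises the probability that a dismissal flag is genuine as each new piece of evidence arrives. The prior and the likelihood figures are made-up numbers, chosen only to show the mechanics:

```python
# A rough sketch of Bayesian updating on a single hypothesis:
# H = "this dismissal flag is genuine". All numbers are illustrative.

def bayes_update(prior: float, likelihood_if_true: float,
                 likelihood_if_false: float) -> float:
    """Return P(H | evidence) given P(H) and the two likelihoods."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Start with an even prior that the flag is genuine.
p = 0.5

# Evidence 1: no renewal paperwork on file.
# Quite likely whether or not dismissal was actually intended.
p = bayes_update(p, likelihood_if_true=0.9, likelihood_if_false=0.6)

# Evidence 2: the current manager confirms he should still be employed.
# Very unlikely if the dismissal were genuine.
p = bayes_update(p, likelihood_if_true=0.05, likelihood_if_false=0.9)

print(f"P(dismissal is genuine) = {p:.2f}")  # falls to roughly 0.08
```

Each update weighs how well the new evidence fits the hypothesis, so a manager’s confirmation drags the probability down sharply – exactly the kind of graded response a stark yes/no system cannot produce.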

No more ‘computer says no’. (Image: Shutterstock)

The need for multiple approaches is often overlooked in the current wave of overenthusiasm for “deep learning” algorithms – complex artificial neural networks, inspired by the human brain, that can recognise patterns in large datasets. As that is all they can do, some experts are now arguing for a more balanced approach: deep learning algorithms are great at pattern recognition, but they certainly don’t show deep understanding.

Using AI in this way would be likely to reduce errors and, when they did occur, the system could share the lessons learned with corresponding AI systems in other companies so that similar mistakes are avoided in future. That is something that cannot be said of human solutions. A good human manager will learn from his or her mistakes, but the next manager is liable to repeat the same errors.

So, what are the downsides? One of the most striking features of Diallo’s experience is the lack of humanity shown. A decision was made, albeit in error, but it was never communicated or explained. An AI might make fewer mistakes, but would it be any better at communicating its decisions? I think the answer is probably not.

Losing your job and livelihood is a stressful and emotional moment for all but the most frivolous employees. It is a moment when sensitivity and understanding are required. So I, for one, would certainly find human contact essential, no matter how convincing the AI chatbot.

A sacked worker may feel that they have been wronged and may want to challenge the decision through a tribunal. That situation raises the question of who was responsible for the original decision and who will defend it in law. Now is surely the moment to address the legal and ethical questions posed by the rise of AI, while it is still in its infancy.

This article was originally published at theconversation.com