The next major cyberattack could involve artificial intelligence systems. It could even happen soon: At a recent cybersecurity conference, 62 of the 100 industry professionals questioned said they thought the first AI-enhanced cyberattack could come in the next 12 months.

This doesn’t mean robots will be marching down Main Street. Rather, artificial intelligence will make existing cyberattack efforts – things like identity theft, denial-of-service attacks and password cracking – more powerful and more efficient. This is dangerous enough – this sort of hacking can steal money, cause emotional harm and even injure or kill people. Larger attacks can cut power to hundreds of thousands of people, shut down hospitals and even affect national security.

As a scholar who has studied AI decision-making, I can tell you that interpreting human actions is still difficult for AIs and that humans don’t really trust AI systems to make major decisions. So, unlike in the movies, the capabilities AI could bring to cyberattacks – and cyberdefense – are not likely to immediately involve computers choosing targets and attacking them on their own. People will still have to create attack AI systems and launch them at particular targets. Nevertheless, adding AI to today’s cybercrime and cybersecurity world will escalate what is already a rapidly changing arms race between attackers and defenders.

Faster attacks

Beyond computers’ lack of need for food and sleep – needs that limit human hackers’ efforts, even when they work in teams – automation could make complex attacks much faster and easier to carry out.

To date, the effects of automation have been limited. Very rudimentary AI-like capabilities have for decades given virus programs the ability to self-replicate, spreading from computer to computer without specific human instructions. In addition, programmers have used their skills to automate different elements of hacking efforts. Distributed attacks, for example, involve triggering a remote program on numerous computers or devices to overwhelm servers. The attack that shut down large sections of the internet in October 2016 used this type of approach. In some cases, common attacks are made available as a script that lets an unsophisticated user choose a target and launch an attack against it.

AI, however, could help human cybercriminals customize attacks. Spearphishing attacks, for instance, require attackers to have personal information about prospective targets – details like where they bank or what health insurance company they use. AI systems can help gather, organize and process large databases to connect identifying information, making this type of attack easier and faster to carry out. That reduced workload may drive thieves to launch numerous smaller attacks that go unnoticed for a long period of time – if detected at all – due to their more limited impact.

AI systems could also be used to pull information together from multiple sources to identify people who would be particularly vulnerable to attack. Someone who is hospitalized or in a nursing home, for example, might not notice money missing from their account until long after the thief has gotten away.

Improved adaptation

AI-enabled attackers will also be much faster to react when they encounter resistance, or when cybersecurity experts fix weaknesses that had previously allowed entry by unauthorized users. The AI may be able to exploit another vulnerability, or start scanning for new ways into the system – without waiting for human instructions.

This could mean that human responders and defenders find themselves unable to keep up with the speed of incoming attacks. It may lead to a programming and technological arms race, with defenders developing AI assistants to identify and protect against attacks – or perhaps even AIs with retaliatory attack capabilities.

Avoiding the dangers

Operating autonomously could lead AI systems to attack a system they shouldn’t, or cause unexpected damage. For example, software started by an attacker intending only to steal money might decide to target a hospital computer in a way that causes human injury or death. The potential for unmanned aerial vehicles to operate autonomously has raised similar questions about the need for humans to make the decisions about targets.

The consequences and implications are significant, but most people won’t notice a big change when the first AI attack is unleashed. For most of those affected, the outcome will be the same as human-triggered attacks. But as we continue to fill our homes, factories, offices and roads with internet-connected robotic systems, the potential effects of an attack by artificial intelligence only grow.

This article was originally published at theconversation.com