Some people are afraid that heavily armed artificially intelligent robots might take over the world, enslaving humanity – or perhaps exterminating us. These people, including tech-industry billionaire Elon Musk and eminent physicist Stephen Hawking, say artificial intelligence technology needs to be regulated to manage the risks. But Microsoft founder Bill Gates and Facebook’s Mark Zuckerberg disagree, saying the technology is not nearly advanced enough for those worries to be realistic.

As someone who researches how AI works in robotic decision-making, drones and self-driving vehicles, I’ve seen how beneficial it can be. I’ve developed AI software that lets robots working in teams make individual decisions as part of collective efforts to explore and solve problems. Researchers are already subject to existing rules, regulations and laws designed to protect public safety. Imposing further limitations risks reducing the potential for innovation with AI systems.
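To give a flavor of what “individual decisions as part of a collective effort” can mean, here is a minimal sketch of one common technique, auction-based task allocation, written in Python. It illustrates the general idea only, not the author’s actual software; the robot names, starting positions and exploration sites are invented for the example.

```python
# Minimal sketch of auction-based task allocation for a robot team
# (illustrative only, not the author's software). Each robot "bids"
# its travel distance to a site; the lowest bidder wins, so robots
# decide individually while coverage emerges collectively.
import math

def assign_sites(robots, sites):
    """Greedily assign each exploration site to the cheapest bidder."""
    assignments = {name: [] for name in robots}
    for site in sites:
        # Each robot's bid is its straight-line distance to the site.
        bids = {name: math.dist(pos, site) for name, pos in robots.items()}
        winner = min(bids, key=bids.get)
        assignments[winner].append(site)
        robots[winner] = site  # the winner's next bid starts from here
    return assignments

if __name__ == "__main__":
    robots = {"r1": (0.0, 0.0), "r2": (10.0, 10.0)}  # starting positions
    sites = [(1.0, 2.0), (9.0, 8.0), (5.0, 5.0)]     # places to explore
    print(assign_sites(robots, sites))
```

Real systems layer communication, failure handling and re-bidding on top of this, but the core pattern – independent bids, collective outcome – is the same.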

How is AI regulated now?

While the term “artificial intelligence” may conjure fantastical images of human-like robots, most people have encountered AI before. It helps us find similar products while shopping, offers movie and TV recommendations and helps us search for websites. It grades student writing, provides personalized tutoring and even recognizes objects carried through airport scanners.

In each case, the AI makes things easier for humans. For example, the AI software I developed could be used to plan and execute a search of a field for a plant or animal as part of a science experiment. But while the AI frees people from doing this work, it still bases its actions on human decisions and goals about where to look and what to search for.
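As a concrete illustration of that division of labor, here is a small Python sketch of a “lawnmower” search pattern over a rectangular field. It is a simplified stand-in, not the author’s actual software: the human supplies the field bounds, the row spacing and the detector that defines what counts as a find; the software only works out the path.

```python
# Sketch of a lawnmower (back-and-forth) search of a rectangular field.
# Humans decide WHERE to look (field bounds, spacing) and WHAT to look
# for (the detect function); the planner just executes the sweep.

def lawnmower_path(width, height, spacing):
    """Yield waypoints sweeping the field in alternating rows."""
    y, left_to_right = 0.0, True
    while y <= height:
        xs = (0.0, width) if left_to_right else (width, 0.0)
        for x in xs:
            yield (x, y)
        y += spacing
        left_to_right = not left_to_right

def search(width, height, spacing, detect):
    """Follow the sweep, stopping at the first detection."""
    for waypoint in lawnmower_path(width, height, spacing):
        if detect(waypoint):  # e.g. a plant-recognition model in practice
            return waypoint
    return None

if __name__ == "__main__":
    # Hypothetical detector: the target happens to sit at (30, 10).
    print(search(30.0, 20.0, 5.0, detect=lambda p: p == (30.0, 10.0)))
```

Every consequential choice – the field, the spacing, the definition of a detection – comes from a person; the AI just carries the plan out.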

In areas like these and many others, AI has the potential to do far more good than harm – if used properly. But I don’t believe additional regulations are needed yet. There are already laws on the books of countries, states and towns governing civil and criminal liabilities for harmful actions. Our drones, for instance, must obey FAA regulations, while self-driving car AI must obey regular traffic laws to operate on public roadways.

Existing laws also cover what happens if a robot injures or kills a person, even if the injury is accidental and the robot’s programmer or operator isn’t criminally responsible. While lawmakers and regulators may need to refine responsibility for AI systems’ actions as technology advances, creating regulations beyond those that already exist could prohibit or slow the development of capabilities that would be overwhelmingly beneficial.

Potential risks from artificial intelligence

It may seem reasonable to worry about researchers developing very advanced artificial intelligence systems that can operate entirely outside human control. A common thought experiment deals with a self-driving car forced to decide whether to run over a child who just stepped into the road or to veer into a guardrail, injuring the car’s occupants and perhaps even those in another vehicle.

Musk and Hawking, among others, worry that hypercapable AI systems, no longer limited to a single set of tasks like controlling a self-driving car, might decide they don’t need humans anymore. They might look at human stewardship of the planet – the interpersonal conflicts, theft, fraud and frequent wars – and decide that the world would be better without people.

Science fiction writer Isaac Asimov tried to address this potential by proposing three laws limiting robot decision-making: Robots cannot injure humans or allow them “to come to harm.” They must also obey humans – unless doing so would harm humans – and protect themselves, as long as this doesn’t harm humans or ignore an order.

But Asimov himself knew the three laws weren’t enough. And they don’t reflect the complexity of human values. What constitutes “harm” is an example: Should a robot protect humanity from suffering related to overpopulation, or should it protect individuals’ freedoms to make personal reproductive decisions?

We humans have already wrestled with these questions in our own, nonartificial intelligences. Researchers have proposed restrictions on human freedoms, including reducing reproduction, to control people’s behavior, population growth and environmental damage. In general, society has decided against using those methods, even when their goals seem reasonable. Similarly, rather than regulating what AI systems can and can’t do, in my opinion it would be better to teach them human ethics and values – like parents do with human children.

The benefits of artificial intelligence

People already benefit from AI every day – but this is just the beginning. AI-controlled robots could assist law enforcement in responding to human gunmen. Current police efforts must focus on preventing officers from being injured, but robots could step into harm’s way, potentially changing the outcomes of cases like the recent shooting of an armed college student at Georgia Tech and an unarmed high school student in Austin.

Intelligent robots can help humans in other ways, too. They can perform repetitive tasks, like processing sensor data, where human boredom may cause mistakes. They can limit human exposure to dangerous materials and dangerous situations, such as when decontaminating a nuclear reactor or working in areas humans can’t go. In general, AI robots can provide humans with more time to pursue whatever they define as happiness by freeing them from having to do other work.

Achieving most of these benefits will require a lot more research and development. Regulations that make it more expensive to develop AIs or prevent certain uses may delay or forestall those efforts. This is particularly true for small businesses and individuals – key drivers of new technologies – who are not as well equipped to deal with regulatory compliance as larger companies. In fact, the biggest beneficiary of AI regulation may be large companies that are used to dealing with it, because startups will have a harder time competing in a regulated environment.

The need for innovation

Humanity faced a similar set of issues in the early days of the internet. But the United States actively avoided regulating the internet to avoid stunting its early growth. Musk’s PayPal and numerous other businesses helped build the modern online world while subject only to regular human-scale rules, like those preventing theft and fraud.

Artificial intelligence systems have the potential to change how humans do just about everything. Scientists, engineers, programmers and entrepreneurs need time to develop the technologies – and deliver their benefits. Their work should be free from concern that some AIs might be banned, and from the delays and costs associated with new AI-specific regulations.

This article was originally published at theconversation.com