Some of the best-known examples of artificial intelligence are Siri and Alexa, which listen to human speech, recognize words, perform searches and translate the text results back into speech. But these and other AI technologies raise important issues, like personal privacy rights and whether machines can ever make fair decisions. As Congress considers whether to make laws governing how AI systems function in society, a congressional committee has highlighted concerns about the variety of AI algorithms that perform specific – if complex – tasks.

Often called “narrow AI,” these devices’ capabilities are distinct from the still-hypothetical general AI machines, whose behavior could be virtually indistinguishable from human activity – more like the “Star Wars” robots R2-D2, BB-8 and C-3PO. Other examples of narrow AI include AlphaGo, a computer program that recently beat a human at the game of Go, and a medical device called OsteoDetect, which uses AI to help doctors detect wrist fractures.

As a teacher and adviser of students researching the regulation of emerging technologies, I view the congressional report as a positive sign of how U.S. policymakers are approaching the unique challenges posed by AI technologies. Before attempting to craft regulations, officials and the public alike need to better understand AI’s effects on individuals and society in general.

Concerns raised by AI technology

Based on information gathered in a series of hearings on AI held throughout 2018, the report highlights the fact that the U.S. is no longer the world leader in AI development. This has happened as part of a broader trend: funding for scientific research has decreased since the early 2000s. In contrast, countries like China and Russia have boosted their spending on developing AI technologies.

Drones can monitor activities in public and on private land. AP Photo/Keith Srakocic

As illustrated by the recent concerns surrounding Russia’s interference in U.S. and European elections, the development of ever more complex technologies raises concerns about the security and privacy of U.S. residents. AI systems can now be used to access personal information, make surveillance systems more efficient and fly drones. Overall, this gives corporations and governments new and more comprehensive tools to monitor and potentially spy on users.

Even though AI development is in its early stages, algorithms can already be easily used to mislead readers, social media users and even the public in general. For instance, algorithms have been programmed to target specific messages to receptive audiences or generate deepfakes, videos that appear to show a person, even a politician, saying or doing something they never actually did.

Of course, like many other technologies, the same AI program can be used for both beneficial and malicious purposes. For instance, LipNet, an AI lip-reading program developed at the University of Oxford, has a 93.4 percent accuracy rate. That’s far beyond the best human lip-readers, who have an accuracy rate between 20 and 60 percent. This is great news for people with hearing and speech impairments. At the same time, the program could be used for broad surveillance, or even to monitor specific individuals.

AI technology can be biased, just like humans

Some uses for AI may be less obvious, even to the people using the technology. Lately, people have become aware of biases in the data that powers AI programs. This has the potential to clash with the generalized perception that a computer will impartially use data to make objective decisions. In reality, human-built algorithms will use imperfect data to make decisions that reflect human bias. Most crucially, the computer’s decision may be presented as, and even believed to be, fairer than a decision made by a human – when in fact the opposite may be true.

For instance, some courts use a program called COMPAS to decide whether to release criminal defendants on bail. However, there is evidence that the program discriminates against black defendants, incorrectly rating them as more likely to commit future crimes than white defendants. Predictive technologies like this are becoming increasingly widespread. Banks use them to determine who gets a loan. Computer analysis of police data purports to predict where criminal activity will occur. In many cases, these programs only reinforce existing bias instead of eliminating it.
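To make the mechanism concrete, here is a minimal, purely hypothetical sketch in Python – not how COMPAS or any real risk tool actually works; the groups, rates and the `historical_record` function are invented for illustration. It shows how a score estimated from skewed historical records ends up rating one group as riskier even when underlying behavior is identical.

```python
# Toy illustration (not any real system's method): a "risk score"
# learned from biased historical records reproduces that bias.
import random
random.seed(0)

# Hypothetical ground truth: both groups reoffend at the same 30% rate.
TRUE_RATE = 0.30

# But the historical records over-report one group: its members are
# recorded as "reoffended" 1.5x as often, e.g. due to heavier policing.
RECORDING_BIAS = {"group_a": 1.5, "group_b": 1.0}

def historical_record(group):
    """Simulate one biased historical data point for a defendant."""
    return random.random() < min(TRUE_RATE * RECORDING_BIAS[group], 1.0)

# "Train": estimate per-group reoffense rates from the biased records.
records = {g: [historical_record(g) for _ in range(10_000)]
           for g in RECORDING_BIAS}
learned_rate = {g: sum(r) / len(r) for g, r in records.items()}

# "Predict": the model now rates group_a as higher risk, even though
# the underlying behavior is identical in both groups.
for g, rate in learned_rate.items():
    print(f"{g}: learned risk score of about {rate:.2f} (true rate {TRUE_RATE})")
```

The particular numbers do not matter; the point is that a score fit to distorted records inherits whatever distortions those records contain, while still looking like an objective calculation.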

What’s next?

As policymakers begin to address the enormous potential – for good and ill – of artificial intelligence, they will need to be careful to avoid stifling innovation. In my view, the congressional report takes the right steps in this regard. It calls for more investment in AI and for funding to be available to more agencies, from NASA to the National Institutes of Health. It also cautions legislators against stepping in too soon and creating too many regulatory hurdles for technologies that are still developing.

More importantly, though, I believe people should begin looking beyond the metrics suggesting that AI programs are functional, time-saving and powerful. The public should start broader conversations about how to eliminate or lessen data bias as the technology advances. If nothing else, adopters of algorithmic technology need to be made aware of the pitfalls of AI. Technologists may be unable to develop algorithms that are fair in measurable ways, but people can become savvier about how they work, what they’re good at – and what they’re not.

This article was originally published at theconversation.com