As public concern about the ethical and social implications of artificial intelligence keeps growing, it might seem like it's time to slow down. But inside tech companies themselves, the sentiment is quite the opposite. As Big Tech's AI race heats up, it would be an "absolutely fatal error in this moment to worry about things that can be fixed later," a Microsoft executive wrote in an internal email about generative AI, as The New York Times reported.

In other words, it's time to "move fast and break things," to quote Mark Zuckerberg's old motto. Of course, if you break things, you might have to fix them later – at a cost.

In software development, the term "technical debt" refers to the implied cost of making future fixes as a result of choosing faster, less careful solutions now. Rushing to market can mean releasing software that isn't ready, knowing that once it does hit the market, you'll find out what the bugs are and can hopefully fix them then.

However, negative news stories about generative AI tend not to be about these kinds of bugs. Instead, much of the concern is about AI systems amplifying harmful biases and stereotypes and students using AI deceptively. We hear about privacy concerns, people being fooled by misinformation, labor exploitation and fears about how quickly human jobs may be replaced, to name a few. These problems are not software glitches. Realizing that a technology reinforces oppression or bias is very different from learning that a button on a website doesn't work.

As a technology ethics educator and researcher, I have thought a lot about these kinds of "bugs." What's accruing here is not just technical debt, but ethical debt. Just as technical debt can result from limited testing during the development process, ethical debt results from not considering possible negative consequences or societal harms. And with ethical debt in particular, the people who incur it are rarely the people who pay for it in the end.

Off to the races

As soon as OpenAI's ChatGPT was released in November 2022 – the starter pistol for today's AI race – I imagined the debt ledger beginning to fill.

Within months, Google and Microsoft released their own generative AI programs, which seemed rushed to market in an effort to keep up. Google's stock prices fell when its chatbot Bard confidently supplied a wrong answer during the company's own demo. One might expect Microsoft to be particularly cautious when it comes to chatbots, considering Tay, its Twitter-based bot that was almost immediately shut down in 2016 after spouting misogynist and white supremacist talking points. Yet early conversations with the AI-powered Bing left some users unsettled, and it has repeated known misinformation.

Not all AI-generated writing is so delightful.

When the social debt of these rushed releases comes due, I expect that we'll hear mention of unintended or unanticipated consequences. After all, even with ethical guidelines in place, it's not as if OpenAI, Microsoft or Google can see the future. How can someone know what societal problems might emerge before the technology is even fully developed?

The root of this dilemma is uncertainty, which is a common side effect of many technological revolutions, but magnified in the case of artificial intelligence. After all, part of the point of AI is that its actions are not known in advance. AI may not be designed to produce negative consequences, but it is designed to produce the unexpected.

However, it's disingenuous to suggest that technologists cannot accurately speculate about what many of these consequences might be. By now, there have been countless examples of how AI can reproduce bias and exacerbate social inequities, but these problems are rarely publicly identified by tech companies themselves. It was external researchers who found racial bias in widely used commercial facial analysis systems, for instance, and in a medical risk prediction algorithm that was being applied to around 200 million Americans. Academics and advocacy or research organizations like the Algorithmic Justice League and the Distributed AI Research Institute are doing much of this work: identifying harms after the fact. And this pattern doesn't seem likely to change if companies keep firing ethicists.

Speculating – responsibly

I sometimes describe myself as a technology optimist who thinks and prepares like a pessimist. The only way to decrease ethical debt is to take the time to think ahead about things that might go wrong – but this is not something that technologists are necessarily taught to do.

Scientist and iconic science fiction author Isaac Asimov once said that sci-fi authors "foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not." Of course, science fiction writers don't tend to be tasked with developing these solutions – but right now, the technologists developing AI are.

So how can AI designers learn to think more like science fiction writers? One of my current research projects focuses on developing ways to support this process of ethical speculation. I don't mean designing with far-off robot wars in mind; I mean the ability to consider future consequences at all, including in the very near future.

Learning to speculate about tech's consequences – not only for tomorrow, but for the here and now.

This is a topic I've been exploring in my teaching for some time, encouraging students to think through the ethical implications of sci-fi technology in order to prepare them to do the same with technology they might create. One exercise I developed is called the Black Mirror Writers Room, where students speculate about possible negative consequences of technology like social media algorithms and self-driving cars. Often these discussions are based on patterns from the past or the potential for bad actors.

Ph.D. candidate Shamika Klassen and I evaluated this teaching exercise in a research study and found that there are pedagogical benefits to encouraging computing students to imagine what might go wrong in the future – and then brainstorm about how we might avoid that future in the first place.

However, the point isn't to prepare students for those far-flung futures; it's to teach speculation as a skill that can be applied right now. This skill is especially important for helping students imagine harm to other people, since technological harms often disproportionately impact marginalized groups that are underrepresented in computing professions. The next steps for my research are to translate these ethical speculation strategies for real-world technology design teams.

Time to hit pause?

In March 2023, an open letter with thousands of signatures advocated for pausing the training of AI systems more powerful than GPT-4. Unchecked, AI systems "might eventually outnumber, outsmart, obsolete and replace us," and even cause a "loss of control of our civilization," its writers warned.

As critiques of the letter point out, this focus on hypothetical risks ignores actual harms happening today. Nevertheless, I believe there is little disagreement among AI ethicists that AI development needs to slow down – that developers throwing up their hands and citing "unintended consequences" isn't going to cut it.

We are only a few months into the "AI race" picking up significant speed, and I think it's already clear that ethical considerations are being left in the dust. But the debt will come due eventually – and history suggests that Big Tech executives and investors may not be the ones paying for it.


This article was originally published at theconversation.com