If you’re like me, you’ve spent a lot of time over the past few months trying to figure out what this AI thing is all about. Large language models, generative AI, algorithmic bias – it’s a lot for the less tech-savvy among us to sort out as we try to make sense of the many headlines about artificial intelligence swirling around.

But understanding how AI works is only part of the challenge. As a society, we’re also confronting concerns about its social, psychological and ethical effects. Here we highlight articles about the deeper questions the AI revolution raises about bias and inequality, the learning process, its impact on jobs, and even the artistic process.

1. Ethical debt

When an organization rushes software to market, it often accrues “technical debt”: the cost of having to fix bugs after a program is released, instead of ironing them out beforehand.

There are examples of this in AI as companies race ahead to compete with one another. More alarming, though, is “ethical debt,” when development teams haven’t considered possible social or ethical harms – how AI could replace human jobs, for instance, or how algorithms could end up reinforcing biases.

Casey Fiesler, a technology ethics expert at the University of Colorado Boulder, wrote that she’s “a technology optimist who thinks and prepares like a pessimist”: someone who puts in time speculating about what might go wrong.

That kind of speculation is an especially useful skill for technologists trying to anticipate consequences that might not affect them, Fiesler explained, but that could hurt “marginalized groups that are underrepresented” in tech fields. When it comes to ethical debt, she noted, “the people who incur it are rarely the people who pay for it in the end.”

2. Is anybody there?

AI programs’ abilities may give the impression that they’re sentient, but they’re not, explained Nir Eisikovits, director of the Applied Ethics Center at the University of Massachusetts Boston. “ChatGPT and similar technologies are sophisticated sentence completion applications – nothing more, nothing less,” he wrote.

But saying AI isn’t conscious doesn’t mean it’s harmless.

“To me,” Eisikovits explained, “the pressing question is not whether machines are sentient but why it is so easy for us to imagine that they are.” Humans readily project human features onto just about anything, including technology. That tendency to anthropomorphize “points to real risks of psychological entanglement with technology,” according to Eisikovits, who studies AI’s impact on how people understand themselves.

People give names to boats and cars – and may get attached to AI, too.
Yuichiro Chino/Moment via Getty Images

Considering how many people talk to their pets and cars, it shouldn’t be a surprise that chatbots can come to mean a lot to people who engage with them. The next steps, though, are “strong guardrails” to prevent programs from taking advantage of that emotional connection.

3. Putting pen to paper

From the beginning, ChatGPT fueled parents’ and teachers’ fears about cheating. How could educators – or college admissions officers, for that matter – figure out whether an essay was written by a human or a chatbot?

But AI raises more fundamental questions about writing, according to Naomi Baron, an American University linguist who studies technology’s effects on language. AI’s potential threat to writing isn’t just about honesty, but about the capacity to think itself.

American author Flannery O’Connor sits with a copy of her novel ‘Wise Blood,’ published in 1952.
Apic/Hulton Archive via Getty Images

Baron pointed to novelist Flannery O’Connor’s remark that “I write because I don’t know what I think until I read what I say.” In other words, writing isn’t just a way to put your thoughts on paper; it’s a process that helps you sort out your thoughts in the first place.

AI text generation can be a handy tool, Baron wrote, but “there’s a slippery slope between collaboration and encroachment.” As we wade into a world of more and more AI, it’s crucial to remember that “crafting written work should be a journey, not just a destination.”

4. The value of art

Generative AI programs don’t just produce text, but also complex images – which have even captured a prize or two. In theory, letting AI handle the nitty-gritty execution could free up human artists’ big-picture creativity.

Not so fast, said Eisikovits and Alec Stubbs, who is also a philosopher at the University of Massachusetts Boston. The finished object viewers appreciate is just one part of the process we call “art.” For creator and appreciator alike, what makes art valuable is “the work of making something real and working through its details”: the struggle to turn ideas into something we can see.

This article was originally published at theconversation.com