Artificial intelligence (AI) powered chatbots have become increasingly human-like by design, to the point that some of us may struggle to distinguish between human and machine.

This week, Snapchat’s My AI chatbot glitched and posted a story of what looked like a wall and ceiling, before it stopped responding to users. Naturally, the internet began to question whether the ChatGPT-powered chatbot had gained sentience.

A crash course in AI literacy could have quelled this confusion. But, beyond that, the incident reminds us that as AI chatbots grow closer to resembling humans, managing their uptake will only get tougher – and more important.

From rules-based to adaptive chatbots

Since ChatGPT burst onto our screens late last year, many digital platforms have integrated AI into their services. Even as I draft this article on Microsoft Word, the software’s predictive AI capability is suggesting possible sentence completions.

Known as generative AI, this relatively new type of AI is distinguished from its predecessors by its ability to generate new content that is precise, human-like and seemingly meaningful.

Generative AI tools, including AI image generators and chatbots, are built on large language models (LLMs). These computational models analyse the associations between billions of words, sentences and paragraphs to predict what should come next in a given text. As OpenAI co-founder Ilya Sutskever puts it, an LLM is

[…] just a very, really good next-word predictor.
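
To make the “next-word predictor” idea concrete, here is a minimal sketch in Python of that core mechanic, using a hypothetical toy corpus: count which words tend to follow which, then repeatedly pick the likeliest continuation. Real LLMs do the same job with billions of learned parameters rather than simple counts, but the principle is unchanged.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; a real LLM trains on billions of words.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count, for each word, which words follow it (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate text by repeatedly asking "what comes next?"
word, sentence = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)

print(" ".join(sentence))  # prints: the cat sat on the
```

Everything a chatbot “says” is produced this way, one token at a time – which is partly why fluent output is so easily mistaken for understanding.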

Advanced LLMs are also fine-tuned with human feedback. This training, often delivered through countless hours of low-cost human labour, is the reason AI chatbots can now have seemingly human-like conversations.

OpenAI’s ChatGPT is still the flagship generative AI model. Its release marked a significant leap from simpler “rules-based” chatbots, such as those used in online customer service.
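
To illustrate the size of that leap, a rules-based chatbot of the kind used in customer service can be sketched in a few lines: hand-written keywords mapped to canned replies, with no ability to say anything its authors didn’t script in advance. (The rules and replies below are hypothetical.)

```python
# A minimal rules-based chatbot sketch: keyword rules mapped to canned
# replies. Unlike an LLM, it can never generate a novel sentence.
RULES = {
    "refund": "To request a refund, please visit your order history.",
    "delivery": "Deliveries usually arrive within 3-5 business days.",
    "hours": "Our support team is available 9am-5pm, Monday to Friday.",
}

def reply(message: str) -> str:
    lowered = message.lower()
    for keyword, canned_reply in RULES.items():
        if keyword in lowered:
            return canned_reply
    return "Sorry, I didn't understand. Could you rephrase that?"

print(reply("Where is my delivery?"))  # matches the "delivery" rule
print(reply("Do you like poetry?"))    # falls through to the fallback
```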

Human-like chatbots that talk with a user, rather than at them, have been linked with higher levels of engagement. One study found the personification of chatbots leads to increased engagement which, over time, may turn into psychological dependence. Another study involving stressed participants found a human-like chatbot was more likely to be perceived as competent, and therefore more likely to help reduce participants’ stress.

These chatbots have also been effective in fulfilling organisational objectives in various settings, including retail, education, the workplace and healthcare.

Google is using generative AI to build a “personal life coach” that can supposedly help people with various personal and professional tasks, including providing life advice and answering intimate questions.

This is despite Google’s own AI safety experts warning that users could grow too dependent on AI and may experience “diminished health and wellbeing” and a “lack of agency” if they take life advice from it.

Friend or foe – or just a bot?

In the recent Snapchat incident, the company put the whole thing down to a “temporary outage”. We may never know what actually happened; it could well be yet another example of AI “hallucinating”, the result of a cyberattack, or even just an operational error.

Either way, the speed with which some users assumed the chatbot had achieved sentience suggests we’re seeing an unprecedented anthropomorphism of AI. It’s compounded by a lack of transparency from developers, and a lack of basic understanding of AI among the public.

We shouldn’t underestimate how people may be misled by the apparent authenticity of human-like chatbots.

Earlier this year, a Belgian man’s suicide was attributed to conversations he’d had with a chatbot about climate inaction and the planet’s future. In another example, a chatbot named Tessa was found to be offering harmful advice to people through an eating disorder helpline.

Chatbots may be particularly harmful to the more vulnerable among us, especially those with psychological conditions.

A new uncanny valley?

You may have heard of the “uncanny valley” effect. It refers to that uneasy feeling you get when you see a humanoid robot that looks almost human, but its slight imperfections give it away, and it ends up being creepy.

It seems a similar experience is emerging in our interactions with human-like chatbots. A slight blip can raise the hairs on the back of your neck.

One solution might be to lose the human edge and revert to chatbots that are straightforward, objective and factual. But this would come at the expense of engagement and innovation.

Education and transparency are key

Even the developers of advanced AI chatbots often can’t explain how they work. Yet in some ways (and as far as commercial entities are concerned) the benefits outweigh the risks.

Generative AI has demonstrated its usefulness in big-ticket areas such as productivity, healthcare, education and even social equity. It’s unlikely to go away. So how do we make it work for us?

Since 2018, there has been a significant push for governments and organisations to address the risks of AI. But applying responsible standards and regulations to a technology that’s more “human-like” than any other comes with a host of challenges.

Currently, there’s no legal requirement for Australian businesses to disclose the use of chatbots. In the US, California has introduced a “bot bill” that would require this, but legal experts have poked holes in it – and the bill has yet to be enforced at the time of writing this article.

Moreover, ChatGPT and similar chatbots are made public as “research previews”. This means they often come with multiple disclosures about their prototypical nature, and the onus for responsible use falls on the user.

The European Union’s AI Act, the world’s first comprehensive regulation on AI, has identified moderate regulation and education as the path forward – since excessive regulation could stunt innovation. Similar to digital literacy, AI literacy should be mandated in schools, universities and organisations, and should also be made free and accessible to the public.

This article was originally published at theconversation.com