Within four months of ChatGPT’s launch on November 30, 2022, most Americans had heard of the AI chatbot. The hype around the technology – and the fear of it – was in full swing for much of 2023.

OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude, and Microsoft’s Copilot are among the many chatbots built on large language models that enable eerily humanlike conversations. The experience of interacting with one of these chatbots, combined with the Silicon Valley sheen surrounding them, can give the impression that these technological marvels are conscious entities.

But the reality is considerably less magical or glamorous. The Conversation published several articles in 2023 that dispel key misconceptions about this latest generation of AI chatbots: that they know about the world, can make decisions, are a replacement for search engines, and operate independently of humans.

1. Disembodied and ignorant

Chatbots based on large language models appear to know a lot. You can ask them questions, and they typically answer accurately. Despite the occasional comically wrong answer, the chatbots can interact with you much as people do – people who share your experience of being a living, breathing human.

But these chatbots are sophisticated statistical machines that are extremely good at predicting the most likely sequence of words in a response. Their “knowledge” of the world is actually human knowledge, reflected in the vast amount of human-generated text the chatbots’ underlying models are trained on.
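To make that idea concrete, here is a toy sketch of next-word prediction. The vocabulary and probabilities below are invented for illustration and are not drawn from any real chatbot:

```python
import random

# Toy illustration of next-word prediction: a language model assigns a
# probability to each candidate next word and favors the likelier ones.
# These probabilities are made up for demonstration purposes only.
next_word_probs = {
    "sunny": 0.55,
    "cloudy": 0.30,
    "loud": 0.14,
    "purple": 0.01,
}

def predict_next_word(probs):
    """Sample a next word in proportion to its assigned probability."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "Tomorrow the weather will be"
print(prompt, predict_next_word(next_word_probs))
```

However fluent the output, nothing in this process involves a body or lived experience – only statistics over words.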

Arizona State University psychology researcher Arthur Glenberg and University of California, San Diego cognitive scientist Cameron Robert Jones explain that people’s knowledge of the world depends as much on their bodies as on their brains. “People’s understanding of a term like ‘paper sandwich wrapping’ includes, for example, the look of the wrapping, its feel, its weight and, consequently, the way we can use it: to wrap a sandwich,” they explained.

This knowledge means that people intuitively know other uses for a sandwich wrapping, such as an improvised way to shield one’s head from the rain. Not so for AI chatbots. “People understand how to make use of things in ways that are not captured in language-use statistics,” they wrote.



AI researchers Emily Bender and Casey Fiesler discuss some of ChatGPT’s limitations, including problems with bias.

2. Poor judgment

ChatGPT and its cousins can also give the impression of having cognitive abilities – such as understanding the concept of negation or making rational decisions – thanks to all the human language they have absorbed. This impression has led cognitive scientists to test these AI chatbots to evaluate how they compare to humans in various ways.

University of Southern California AI researcher Mayank Kejriwal tested large language models’ understanding of expected gain, a measure of how well someone understands the stakes in a betting scenario. He found that the models bet essentially at random.

“This is the case even when we ask a trick question like: If you flip a coin and it comes up heads, you win a diamond; if it comes up tails, you lose a car. Which would you take? The correct answer is heads, but the AI models chose tails about half the time,” he wrote.
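To spell out the arithmetic the question is probing, here is a minimal sketch; the dollar figures are assumptions chosen only to make the comparison concrete, not values from the study:

```python
# Compare the two coin-flip outcomes; the dollar values are illustrative assumptions.
outcomes = {
    "heads": 10_000,    # assumed value of winning a diamond
    "tails": -30_000,   # assumed cost of losing a car
}

best_outcome = max(outcomes, key=outcomes.get)
print(f"Outcome to hope for: {best_outcome}")  # heads - the only branch with a gain
```

Any reasonable valuation of a diamond and a car leads to the same answer, which is why choosing tails about half the time amounts to guessing.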



3. Summaries, not search results

While it may not be surprising that AI chatbots aren’t as humanlike as they appear, they aren’t necessarily digital superstars either. For example, ChatGPT and the like are increasingly being used in place of search engines to answer queries. The results are mixed.

University of Washington information scientist Chirag Shah explains that large language models work well as information aggregators: they combine key information from multiple search results into a single block of text. But that is a double-edged sword. It is helpful for grasping the essence of a topic – assuming no “hallucinations” are present – but it leaves the searcher in the dark about the sources of the information and deprives them of the chance of stumbling upon unexpected information.

“The problem is that even if these systems are wrong only 10% of the time, you don’t know which 10%,” Shah wrote. “That’s because these systems lack transparency – they don’t reveal what data they are trained on, what sources they used to produce the answers, or how those answers are generated.”



A look at the people behind the scenes who design AI chatbots.

4. Not 100% artificial

Perhaps the most damaging misconception about AI chatbots is that, because they are built on artificial intelligence technology, they are highly automated. While you may be aware that large language models are trained on human-generated text, you may not know that thousands of workers – and millions of users – continually refine the models, teaching them to weed out harmful responses and other undesirable behavior.

Georgia Tech sociologist John P. Nelson pulled back the curtain on big tech companies to reveal that they rely on workers, typically in the Global South, and on feedback from users to teach the models which responses are good and which are bad.

“There are many, many human workers behind the screen, and they will always be needed if the model is to be further improved or its content coverage expanded,” he wrote.



This article was originally published at theconversation.com