Information is a valuable commodity. And thanks to technology, there are hundreds of thousands of terabytes of it online.

Artificial intelligence (AI) tools such as ChatGPT are now managing this information on our behalf – collating it, summarising it, and presenting it back to us.

But this “outsourcing” of information management to AI – convenient as it is – comes with consequences. It can influence not only what we think, but potentially also how we think.

What happens in a world where AI algorithms determine what information is perpetuated, and what’s left by the wayside?

The rise of personalised AI

Generative AI tools are built on models trained on hundreds of gigabytes of preexisting data. From these data they learn how to autonomously create text, images, audio and video content, and can respond to user queries by patching together the “most likely” answer.
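To make that “most likely answer” idea concrete, here is a deliberately tiny Python sketch: a bigram counter that always continues a sentence with the statistically most common next word. It is an illustration of the underlying statistical principle only – real generative models use neural networks trained on vastly more data.

```python
# Toy sketch of "predict the most likely next word" - not a real model.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most likely continuation of `word`."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else "<end>"

# Generate text by repeatedly appending the most likely next word.
text = ["the"]
for _ in range(4):
    text.append(most_likely_next(text[-1]))
print(" ".join(text))  # e.g. "the cat sat on the"
```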

ChatGPT is used by millions of people, despite having been publicly released less than a year ago. In June, the addition of custom instructions made the already-impressive chatbot even more useful. This feature lets users save customised instructions explaining what they’re using the bot for and how they would like it to respond.
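Mechanically, a feature like this amounts to prepending the user’s saved preferences to every conversation. The sketch below shows that general pattern using the openai Python package; the model name and instruction text are illustrative assumptions, not ChatGPT’s actual implementation.

```python
# Hypothetical sketch: emulating "custom instructions" by sending saved
# user preferences as a system message with every request.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The user's saved preferences: who they are and how the bot should reply.
CUSTOM_INSTRUCTIONS = (
    "I am a high-school science teacher. "
    "Answer concisely, in plain language, with one classroom example."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Why is the sky blue?"))
```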

This is one of several examples of “personalised AI”: a category of AI tools that generate content to suit the specific needs and preferences of the user.

Another example is Meta’s recently launched virtual assistant, Meta AI. This chatbot can have conversations, generate images and perform tasks across Meta’s platforms including WhatsApp, Messenger and Instagram.

Artificial intelligence researcher and co-founder of DeepMind, Mustafa Suleyman, describes personalised AI as being more of a relationship than a technology:

It’s a friend. […] It’s really going to be ever present, alongside you, living with you – basically on your team. I like to think of it as like having a great coach in your corner.

But these technologies are also controversial, with concerns raised over data ownership, bias and misinformation.

Tech companies are trying to find ways to combat these issues. For instance, Google has added source links to AI-generated search summaries produced by its Search Generative Experience (SGE) tool, which came under fire earlier this year for offering up inaccurate and problematic responses.

Technology has already changed our thinking

How will generative AI tools – and particularly those personalised to us – change how we think?

To understand this, let’s revisit the early 1990s, when the internet first came into our lives. People could suddenly access information about just about anything, whether that was banking, baking, teaching or travelling.

Nearly 30 years on, studies have shown how being connected to this global “hive mind” has changed our cognition, memory and creativity.

For instance, having instantaneous access to the equivalent of 305.5 billion pages of information has increased people’s meta-knowledge – that is, their knowledge about knowledge. One impact of this is the “Google effect”: a phenomenon in which online searching increases our ability to find information, but reduces our memory of what that information was.

On one hand, offloading our thinking to search engines has been shown to free up our mental reserves for problem solving and creative thinking. On the other, online information retrieval has been associated with increased distractibility and dependency.

Research also shows online searching – regardless of the quantity or quality of information retrieved – increases our cognitive self-esteem. In other words, it increases our belief in our own “smarts”.

Couple this with the fact that questioning information is effortful – and that the more we trust our search engine, the less we critically engage with its results – and you can see why having access to unprecedented amounts of information is not necessarily making us wiser.

Should we be ‘outsourcing’ our thinking?

Today’s generative AI tools go a lot further than simply presenting us with search results. They locate the information for us, evaluate it, synthesise it and present it back to us.
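As a rough picture of that locate–evaluate–synthesise pipeline, here is a toy Python sketch with invented documents and naive word-overlap scoring; real tools perform each of these steps with learned models rather than simple string matching.

```python
# Toy pipeline (entirely invented data and scoring) illustrating the
# locate -> evaluate -> synthesise steps described above.
documents = {
    "doc1": "Solar panels convert sunlight into electricity",
    "doc2": "Wind turbines generate power from moving air",
    "doc3": "Cats sleep for most of the day",
}

def locate(query: str) -> list[str]:
    """Locate: keep documents sharing any word with the query."""
    q = set(query.lower().split())
    return [d for d, t in documents.items() if q & set(t.lower().split())]

def evaluate(doc_ids: list[str], query: str) -> list[str]:
    """Evaluate: rank located documents by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(doc_ids,
                  key=lambda d: len(q & set(documents[d].lower().split())),
                  reverse=True)

def synthesise(doc_ids: list[str]) -> str:
    """Synthesise: stitch the top results into a single answer."""
    return ". ".join(documents[d] for d in doc_ids[:2]) + "."

query = "how do solar panels generate electricity"
print(synthesise(evaluate(locate(query), query)))
```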

What might the implications of this be? Without a push for human-led quality control, the outlook isn’t promising.

Generative AI’s ability to produce responses that feel familiar, objective and engaging means it leaves us more vulnerable to cognitive biases.

Automation bias, for instance, is the human tendency to overestimate the integrity of machine-sourced information. And the mere exposure effect is when we’re more likely to trust information that’s presented as familiar or personal.

Research on social media can help us understand the impact of such biases. In one 2016 study, Facebook users reported feeling more “in the know” based on the amount of news content posted online – and not how much of it they actually read.

We also know that “filter bubbles” created by social media algorithms – wherein our feeds are filtered according to our interests – limit the diversity of the content we’re exposed to.

This process of information narrowing has been shown to increase ideological polarisation by reducing people’s propensity to consider alternative perspectives. It’s also been shown to increase our likelihood of being exposed to fake news.
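The narrowing mechanism itself is simple to sketch. The toy Python example below (invented posts and scoring, not any platform’s real algorithm) shows how ranking a feed by overlap with past interests pushes everything else out of view.

```python
# Toy sketch of interest-based feed filtering and how it narrows diversity.
posts = [
    {"topic": "politics-left",  "tags": {"politics", "left"}},
    {"topic": "politics-right", "tags": {"politics", "right"}},
    {"topic": "cooking",        "tags": {"food", "recipes"}},
    {"topic": "science",        "tags": {"research", "space"}},
]

user_interests = {"politics", "left"}  # inferred from past clicks

def score(post) -> int:
    """Relevance = overlap between post tags and the user's interests."""
    return len(post["tags"] & user_interests)

# Keep only the top-ranked posts, as a feed algorithm might.
feed = sorted(posts, key=score, reverse=True)[:2]
print([p["topic"] for p in feed])
# ['politics-left', 'politics-right'] -- cooking and science never surface,
# and each click on what remains narrows the interest profile further.
```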

Using AI to smarten up, not dumb down

Generative AI is, without doubt, a revolutionary force with the potential to do great things for society. It could reshape our education system by providing personalised content, change our work practices by expediting writing and data analysis, and push the frontiers of scientific discovery.

It even has the potential to positively alter our relationships by helping us communicate and connect with others and can, at times, serve as a form of synthetic companionship.

But if our only way to judge the future is by looking to the past, perhaps now is the time to reflect on how both the internet and social media have changed our cognition, and apply some precautionary measures. Developing AI literacy is a good place to start, as is designing AI tools that encourage human autonomy and critical thinking.

Ultimately, we’ll need to understand both our own and AI’s strengths and weaknesses to make sure these “thinking” companions help us create the future we want – and not the one that happens to be at the top of the list.

This article was originally published at theconversation.com