Mainstream conversations about artificial intelligence (AI) have been dominated by a few key concerns, such as whether superintelligent AI will wipe us out, or whether AI will steal our jobs. But we've paid less attention to the many other environmental and social impacts of our "consumption" of AI, which are arguably just as important.

Everything we consume has associated "externalities" – the indirect impacts of our consumption. For instance, industrial pollution is a well-known externality that has a negative impact on people and the environment.

The online services we use every day also have externalities, but there is a much lower level of public awareness of these. Given the massive uptake in the use of AI, these factors mustn't be overlooked.

Environmental impacts of AI use

In 2019, French think tank The Shift Project estimated that the use of digital technologies produces more carbon emissions than the aviation industry. And although AI is currently estimated to contribute less than 1% of total carbon emissions, the AI market is predicted to grow ninefold by 2030.

Tools such as ChatGPT are built on advanced computational systems called large language models (LLMs). Although we access these models online, they are run and trained in physical data centres around the world that consume significant resources.

Last year, AI company Hugging Face published an estimate of the carbon footprint of its own LLM called BLOOM (a model of comparable complexity to OpenAI's GPT-3).

Accounting for the impact of raw material extraction, manufacturing, training, deployment and end-of-life disposal, the model's development and use resulted in emissions equivalent to 60 flights from New York to London.

Hugging Face also estimated GPT-3's life cycle would result in ten times greater emissions, since the data centres powering it run on a more carbon-intensive grid. This is without considering the raw material, manufacturing and disposal impacts associated with GPT-3.

OpenAI’s latest LLM offering, GPT-4, is rumoured to have trillions of parameters and potentially far greater energy usage.

Beyond this, running AI models requires large amounts of water. Data centres use water towers to cool the on-site servers where AI models are trained and deployed. Google recently came under fire for plans to build a new data centre in drought-stricken Uruguay that would use 7.6 million litres of water each day to cool its servers, according to the nation's Ministry of Environment (although the Minister for Industry has contested the figures). Water is also needed to generate the electricity used to run data centres.

In a preprint published this year, Pengfei Li and colleagues presented a methodology for gauging the water footprint of AI models. They did this in response to a lack of transparency in how companies evaluate the water footprint associated with using and training AI.

They estimated training GPT-3 required somewhere between 210,000 and 700,000 litres of water (the equivalent of that used to produce between 300 and 1,000 cars). For a conversation of 20 to 50 questions, ChatGPT was estimated to "drink" the equivalent of a 500 millilitre bottle of water.
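To put that per-conversation figure in perspective, a rough back-of-envelope calculation (a minimal sketch based only on the preprint's estimate quoted above; the per-question breakdown is illustrative, not a figure from the study) suggests how much water each individual question might "drink":

```python
# Back-of-envelope estimate of ChatGPT's water use per question,
# using the preprint's figure of roughly 500 ml per 20-50 question conversation.
# Illustrative only; real figures vary by data centre, grid and cooling method.

BOTTLE_ML = 500                          # estimated water per conversation (millilitres)
QUESTIONS_LOW, QUESTIONS_HIGH = 20, 50   # assumed questions per conversation

per_question_high = BOTTLE_ML / QUESTIONS_LOW    # shorter conversations -> more water per question
per_question_low = BOTTLE_ML / QUESTIONS_HIGH

print(f"Water per question: roughly {per_question_low:.0f}-{per_question_high:.0f} ml")
# -> roughly 10-25 ml per question, before counting the water embedded in
#    electricity generation or the model's original training.
```

Small per query, but multiplied across hundreds of millions of users the totals add up quickly.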

Social impacts of AI use

LLMs often require extensive human input during the training phase. This work is often outsourced to independent contractors who face precarious working conditions in low-income countries, leading to "digital sweatshop" criticisms.

In January, Time reported on how Kenyan workers contracted to label text data for ChatGPT's "toxicity" detection were paid less than US$2 per hour while being exposed to explicit and traumatic content.

LLMs can also be used to generate fake news and propaganda. Left unchecked, AI could be used to manipulate public opinion, and by extension could undermine democratic processes. In a recent experiment, researchers at Stanford University found AI-generated messages were consistently persuasive to human readers on topical issues such as carbon taxes and banning assault weapons.

Not everyone will be able to adapt to the AI boom. The large-scale adoption of AI has the potential to worsen global wealth inequality. It will not only cause significant disruptions to the job market, but could particularly marginalise workers from certain backgrounds and in specific industries.

Are there solutions?

How AI impacts us over time will depend on myriad factors. Future generative AI models could be designed to use significantly less energy, but it's hard to say whether they will be.

When it comes to data centres, their location, the type of power generation they use, and the time of day they are used can significantly affect their overall energy and water consumption. Optimising these computing resources could result in significant reductions. Companies including Google, Hugging Face and Microsoft have championed the role their AI and cloud services can play in managing resource usage to achieve efficiency gains.

Also, as direct or indirect consumers of AI services, it's important we're all aware that every chatbot query and image generation results in water and energy use, and may have implications for human labour.

AI's growing popularity might eventually trigger the development of sustainability standards and certifications. These would help users understand and compare the impacts of specific AI services, allowing them to choose those that have been certified. This would be similar to the Climate Neutral Data Centre Pact, in which European data centre operators have agreed to make data centres climate neutral by 2030.

Governments will also play a part. The European Parliament has approved draft laws to mitigate the risks of AI usage. And earlier this year, the US Senate heard testimony from a range of experts on how AI might be effectively regulated and its harms minimised. China has also published rules on the use of generative AI, requiring security assessments for products offering services to the public.



This article was originally published at theconversation.com