Given the vast problem-solving potential of artificial intelligence (AI), it is not unreasonable to think that AI could also help us cope with the climate crisis. But once you look at the energy requirements of AI models, it becomes clear that the technology is as much a part of the climate problem as it is a solution.

The emissions come from AI-related infrastructure, such as the construction and operation of the data centers that process the massive amounts of data required to sustain these systems.

But different technological approaches to building AI systems could help reduce their carbon footprint. Two technologies in particular are promising: spiking neural networks and lifelong learning.

The lifespan of an AI system can be divided into two phases: training and inference. During training, a relevant dataset is used to build and optimize the system. During inference, the trained system generates predictions on previously unseen data.

For example, training an AI for use in self-driving cars would require a dataset containing many different driving scenarios and the decisions made by human drivers in them.

After the training phase, the AI system can predict effective maneuvers for a self-driving car. Artificial neural networks (ANNs) are the underlying technology used in most current AI systems.

They consist of many different elements, so-called parameters, whose values are adjusted during the training phase. These parameters can number more than 100 billion in total.

While large numbers of parameters improve the capabilities of ANNs, they also make training and inference resource-intensive. To put things in perspective, training GPT-3 (the precursor to the current ChatGPT) produced 502 tonnes of carbon, equivalent to driving 112 gasoline-powered cars for a year.
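A quick back-of-the-envelope check makes the comparison above concrete: spreading 502 tonnes over 112 cars gives roughly 4.5 tonnes of CO₂ per car per year, which matches typical estimates for an average gasoline-powered passenger car.

```python
# Sanity check on the figures quoted above:
# 502 tonnes of CO2 from training, compared with 112 cars driven for a year.
total_tonnes = 502
cars = 112

per_car = total_tonnes / cars
print(round(per_car, 1))  # 4.5 tonnes of CO2 per car per year
```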

GPT-3 continues to emit 8.4 tonnes of CO₂ per year through inference. Since the AI boom began in the early 2010s, the energy demands of AI systems known as large language models (LLMs) – the type of technology behind ChatGPT – have increased by a factor of 300,000.

As AI models become more ubiquitous and complex, this trend will continue, potentially making AI a significant contributor to CO₂ emissions. In fact, current estimates could understate AI's actual carbon footprint, owing to the lack of standardized and accurate techniques for measuring AI-related emissions.



Spiking neural networks

The emerging technologies mentioned above, spiking neural networks (SNNs) and lifelong learning (L2), have the potential to reduce AI's ever-growing carbon footprint, with SNNs offering an energy-efficient alternative to ANNs.

ANNs work by processing data and learning patterns from it to make predictions. They work with decimal numbers, and to perform accurate calculations, especially when multiplying decimal numbers together, the computer must be very precise. Because of these decimals, ANNs require a lot of computing power, memory and time.
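To illustrate the point about decimals, here is a minimal sketch (ours, not tied to any specific framework) of the work a single artificial neuron does: every input is multiplied by a decimal-valued weight, so the hardware performs a precise floating-point multiplication for each connection, on every input it sees.

```python
# A single artificial neuron: one floating-point multiply per input,
# then a sum and an activation. This computation runs for every input,
# whether or not the result is "interesting" -- the neuron is always active.

def artificial_neuron(inputs, weights, bias):
    # Weighted sum: one decimal multiplication per connection
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ReLU activation: pass the value through if it is positive
    return max(0.0, total)

output = artificial_neuron([0.5, 0.1, 0.9], [0.2, -0.4, 0.7], 0.05)
print(round(output, 2))  # 0.74
```

Multiply that handful of operations by more than 100 billion parameters, and the energy cost of a full ANN becomes clear.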

This means that the larger and more complex the networks become, the more energy-intensive ANNs are. Both ANNs and SNNs are inspired by the brain, which contains billions of neurons (nerve cells) connected to one another via synapses.

Like the brain, ANNs and SNNs also have components that researchers call neurons, although these are artificial rather than biological. The main difference between the two kinds of neural networks is the way individual neurons transmit information to one another.

Neurons in the human brain communicate with one another by transmitting intermittent electrical signals called spikes. The spikes themselves contain no information; instead, the information lies in the timing of the spikes. This binary, all-or-none character of spikes (usually represented as 0 or 1) means that neurons are active when they produce spikes and inactive otherwise.

This is one of the reasons why processing in the brain is so energy-efficient.

Just as Morse code uses specific sequences of dots and dashes to convey messages, SNNs use patterns and timing of spikes to process and transmit information. So while the artificial neurons in ANNs are always active, SNNs only consume energy when a spike occurs.

Otherwise the energy requirement is close to zero. SNNs can be up to 280 times more energy-efficient than ANNs.
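A common SNN building block is the leaky integrate-and-fire neuron. The toy sketch below (our illustration, not the authors' own model) shows the key behavior: the neuron accumulates input, emits an all-or-none spike only when a threshold is crossed, and sits silent otherwise, which is exactly the idleness that lets SNN hardware save energy.

```python
# A minimal leaky integrate-and-fire (LIF) neuron.
# Output is a binary spike train: 1 when the membrane potential
# crosses the threshold, 0 (silence, ~zero energy) otherwise.

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for i in input_current:
        potential = potential * leak + i  # integrate input, with leak
        if potential >= threshold:
            spikes.append(1)              # all-or-none spike event
            potential = 0.0               # reset after firing
        else:
            spikes.append(0)              # inactive: no spike emitted
    return spikes

print(lif_neuron([0.5, 0.5, 0.5, 0.0, 0.9, 0.9]))  # [0, 0, 1, 0, 0, 1]
```

Note that the information is carried by *when* the 1s occur, not by their magnitude, mirroring the timing-based coding described above.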

My colleagues and I are developing learning algorithms for SNNs that could bring them even closer to the energy efficiency of the brain. The lower computational cost also means that SNNs may be able to make decisions more quickly.

These properties make SNNs useful for a wide range of applications, including space exploration, defense and self-driving cars, because of the limited energy sources available in these scenarios.

L2, which we are also working on, is another strategy for reducing the overall energy consumption of ANNs over their lifetime.

Training ANNs sequentially (where the systems learn from sequences of data) on new problems causes them to forget their previous knowledge when learning new tasks. ANNs must then be retrained from scratch whenever their operating environment changes, further increasing AI-related emissions.

L2 is a collection of algorithms that make it possible to train AI models on multiple tasks one after another without forgetting anything. L2 allows models to learn throughout their lifetime by building on their existing knowledge, without having to be retrained from scratch.
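One simple L2 strategy is "rehearsal": keep a small buffer of examples from earlier tasks and mix them into training on each new task, so old knowledge is refreshed rather than overwritten. The sketch below is a generic illustration of that idea, not the authors' own algorithm; the class and parameter names are our invention.

```python
import random

class RehearsalBuffer:
    """Small memory of past examples, replayed alongside new-task data."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, example):
        # Reservoir sampling: keep a uniform random sample of all
        # examples seen so far, within a fixed memory budget.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def training_batch(self, new_examples, k=4):
        # Mix fresh task data with a few replayed old examples,
        # so training on the new task also rehearses earlier ones.
        replay = random.sample(self.buffer, min(k, len(self.buffer)))
        return new_examples + replay
```

Because only a fixed-size buffer is kept and the model is never restarted from scratch, the total training compute over the model's lifetime stays far below repeated full retraining.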

The field of AI is growing rapidly, and further potential advances are emerging that could reduce the energy requirements of this technology. For example, smaller AI models can be created that have the same predictive capabilities as a larger model.

Advances in quantum computing – a different approach to building computers that harnesses phenomena from quantum physics – would also enable faster training and inference for both ANNs and SNNs. The superior computing capabilities of quantum computers could allow us to find energy-efficient solutions for AI at a much larger scale.

The challenge of climate change demands that we try to find solutions for fast-moving areas like AI before their carbon footprint grows too large.

This article was originally published at theconversation.com