This story was published in partnership with The Markup, a nonprofit, investigative newsroom that challenges technology to serve the public good.

“Something’s fishy,” declared a March newsletter from the right-wing, fossil fuel-funded think tank Texas Public Policy Foundation. The caption sat beneath an imposing image of a stranded whale on a beach, with three huge offshore wind turbines looming in the background.


Something truly was fishy about that image. Not because offshore wind causes whale deaths, a groundless conspiracy theory pushed by fossil fuel interests that the image attempts to bolster, but because, as Gizmodo writer Molly Taft reported, the photo was fabricated using artificial intelligence. Along with eerily pixelated sand, oddly curved beach debris, and wind turbine blades mistakenly fused together, the picture retains a telltale rainbow watermark from the AI image generator DALL-E.

DALL-E is one of countless AI models that have surged to otherworldly levels of popularity, particularly in the last year. But as hundreds of millions of users marvel at AI’s ability to produce novel images and believable text, the current wave of hype has obscured the ways AI could be hindering progress on climate change.


Advocates argue that these impacts — which include vast carbon emissions associated with the electricity needed to run the models, a pervasive use of AI in the oil and gas industry to boost fossil fuel extraction, and a worrying uptick in the output of misinformation — are flying under the radar. While many prominent researchers and investors have stoked fears around AI’s “godlike” technological force or potential to end civilization, a slew of real-world consequences aren’t getting the attention they deserve. 

Many of these harms extend far beyond climate issues, including algorithmic racism, copyright infringement, and exploitative working conditions for data workers who help develop AI models. “We see technology as an inevitability and don’t think about shaping it with societal impacts in mind,” David Rolnick, a computer science professor at McGill University and a co-founder of the nonprofit Climate Change AI, told Grist.

But the effects of AI, including its impact on our climate and efforts to curtail climate change, are anything but inevitable. Experts say we can and should confront these harms — but first, we need to understand them.

Large AI models produce an unknown amount of emissions

At its core, AI is essentially “a marketing term,” the Federal Trade Commission stated back in February. There is no absolute definition for what an AI technology is. But usually, as Amba Kak, the executive director of the AI Now Institute, describes it, AI refers to algorithms that process large amounts of data to perform tasks like generating text or images, making predictions, or calculating scores and rankings.


Processing all of that data means large AI models gobble up huge quantities of computing power in their development and use. Take ChatGPT, for instance, the OpenAI chatbot that has gone viral for producing convincing, humanlike text. Researchers estimated that training GPT-3, the predecessor to this year’s GPT-4 and the model that originally powered ChatGPT, emitted 552 metric tons of carbon dioxide equivalent, equal to more than three round-trip flights between San Francisco and New York. Total emissions are likely much higher, since that number accounts for training GPT-3 only once; in practice, models can be retrained thousands of times while they are being built.
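Estimates like this are typically derived by multiplying the electricity a training run consumes by the carbon intensity of the grid supplying it. The minimal sketch below reproduces that arithmetic, using a published energy estimate for GPT-3’s training run of about 1,287 megawatt-hours; the grid carbon intensity is an assumed average chosen for illustration, not a measured value.

```python
# Back-of-the-envelope version of a training-emissions estimate:
# emissions = electricity consumed x grid carbon intensity.
# The energy figure is a published estimate for GPT-3's training run;
# the grid intensity is an assumed average, used here for illustration.

training_energy_kwh = 1_287_000      # ~1,287 MWh, reported estimate for GPT-3
grid_kg_co2e_per_kwh = 0.429         # assumed average grid carbon intensity

emissions_metric_tons = training_energy_kwh * grid_kg_co2e_per_kwh / 1_000
print(f"~{emissions_metric_tons:,.0f} metric tons of CO2e")  # ~552
```

Retraining compounds this directly: every additional full training run adds roughly that much again.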

OpenAI CEO Sam Altman speaks at Keio University in Tokyo on June 12. Tomohiro Ohsumi / Getty Images

The estimate also does not include the energy consumed when ChatGPT is used by approximately 13 million people each day. Researchers highlight that actually using a trained model, a phase known as inference, can make up 90 percent of the energy use associated with an AI machine-learning model. And GPT-4, the newest model powering ChatGPT, likely requires far more computing power because it is a much larger model.

No clear data exists on exactly how much climate pollution results from the use of large AI models by billions of users. But researchers at Google found that machine learning accounts for about 15 percent of the company’s total energy use. Bloomberg reports that this amounts to 2.3 terawatt-hours annually, roughly as much electricity as the homes in a city the size of Atlanta use in a year.
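That comparison is easy to sanity-check. In the sketch below, the 2.3 terawatt-hour figure comes from the reporting above, while the household consumption number is an assumed U.S. average (roughly 10,000 to 11,000 kilowatt-hours per year), not a value from the report.

```python
# Rough check of the Atlanta comparison: divide annual machine-learning
# energy use by typical household consumption to count equivalent homes.

ml_energy_kwh = 2.3e9                # 2.3 TWh, reported annual ML energy use
household_kwh_per_year = 10_600      # assumed average U.S. household usage

homes_powered = ml_energy_kwh / household_kwh_per_year
print(f"~{homes_powered:,.0f} homes powered for a year")  # ~217,000
```

A couple hundred thousand homes is on the order of the number of households in Atlanta, consistent with Bloomberg’s framing.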

The lack of transparency from the companies behind AI products, such as Microsoft, Google, and OpenAI, means that the total amount of power and emissions involved in AI technology is unknown. For instance, OpenAI has not disclosed what data was fed into this year’s GPT-4 model, how much computing power was used, or how the chatbot was changed.

“We’re talking about ChatGPT and we know nothing about it,” Sasha Luccioni, a researcher who has studied AI models’ carbon footprints, told Bloomberg. “It could be three raccoons in a trench coat.”

AI fuels climate misinformation online

AI could also fundamentally shift the way we consume — and trust — information online. The U.K. nonprofit Center for Countering Digital Hate tested Google’s Bard chatbot and found it capable of producing harmful and false narratives around topics like COVID-19, racism, and climate change. For instance, Bard told one user, “There is nothing we can do to stop climate change, so there is no point in worrying about it.”

The ability of chatbots to spout misinformation is baked into their design, according to Rolnick. “Large language models are designed to create text that looks good rather than being actually true,” he said. “The goal is to match the style of human language rather than being grounded in facts” — a tendency that “lends itself perfectly to the creation of misinformation.” 

Google, OpenAI, and other large tech companies usually try to address content issues as these models are deployed live. But these efforts often amount to “papered over” solutions, Rolnick said. “Testing their content more deeply, one finds these biases deeply encoded in much more insidious and subtle ways that haven’t been patched by the companies deploying the algorithms,” he said.

Giulio Corsi, a researcher at the U.K.-based Leverhulme Centre for the Future of Intelligence who studies climate misinformation, said an even bigger concern is AI-generated images. Unlike text produced on an individual scale through a chatbot, images can “spread very quickly and break the sense of trust in what we see,” he said. “If people start doubting what they see in a consistent way, I think that’s pretty concerning behavior.”

Climate misinformation existed long before AI tools. But now, groups like the Texas Public Policy Foundation have a new weapon in their arsenal to launch attacks against renewable energy and climate policies — and the fishy whale image indicates that they’re already using it.

A view of the Google office in London in May. Steve Taylor / SOPA Images / LightRocket via Getty Images

AI’s climate impacts depend on who’s using it, and how

Researchers emphasize that AI’s real-world effects aren’t predetermined — they depend on the intentions, and actions, of the people developing and using it. As Corsi puts it, AI can be used “as both a positive and negative force” when it comes to climate change.

For example, AI is already used by climate scientists to further their research. By combing through huge amounts of data, AI can help create climate models, analyze satellite imagery to track deforestation, and forecast weather more accurately. AI systems can also help improve the performance of solar panels, monitor emissions from energy production, and optimize cooling and heating systems, among other applications.

At the same time, AI is also used extensively by the oil and gas sector to boost the production of fossil fuels. Despite touting net-zero climate targets, Microsoft, Google, and Amazon have all come under fire for their lucrative cloud computing and AI software contracts with oil and gas companies including ExxonMobil, Schlumberger, Shell, and Chevron. 

A 2020 report by Greenpeace found that these contracts exist at every phase of oil and gas operations. Fossil fuel companies use AI technologies to ingest massive amounts of data to locate oil and gas deposits and create efficiencies across the entire supply chain, from drilling to shipping to storing to refining. AI analytics and modeling could generate up to $425 billion in added revenue for the oil and gas sector between 2016 and 2025, according to the consulting firm Accenture.

AI’s application in the oil and gas sector is “quite unambiguously serving to increase global greenhouse gas emissions by outcompeting low-carbon energy sources,” said Rolnick. 

Google spokesperson Ted Ladd told Grist that while the company still holds active cloud computing contracts with oil and gas companies, Google does not currently build custom AI algorithms to facilitate oil and gas extraction. Amazon spokesperson Scott LaBelle emphasized that Amazon’s AI software contracts with oil and gas companies focus on making “their legacy businesses less carbon intensive,” while Microsoft representative Emma Detwiler told Grist that Microsoft provides advanced software technologies to oil and gas companies that have committed to net-zero emissions targets.  

EU commissioners Margrethe Vestager and Thierry Breton at a press conference on AI and digital technologies in 2020 in Brussels. Thierry Monasse / Getty Images

There are currently no major policies to regulate AI

When it comes to how AI can be used, it’s “the Wild West,” as Corsi put it. The lack of regulation is particularly alarming when you consider the scale at which AI is deployed, he added. Facebook, which uses AI to recommend posts and products, boasts nearly 3 billion users. “There’s nothing that you could do at that scale without any oversight,” Corsi said — except AI. 

In response, advocacy groups such as Public Citizen and the AI Now Institute have called for the tech companies responsible for these AI products to be held accountable for AI’s harms. Rather than relying on the public and policymakers to investigate and find solutions for AI’s harms after the fact, AI Now’s 2023 Landscape report calls for governments to “place the burden on companies to affirmatively demonstrate that they are not doing harm.” Advocates and AI researchers also call for greater transparency and reporting requirements on the design, data use, energy usage, and emissions footprint of AI models.

Meanwhile, policymakers are gradually coming up to speed on AI governance. In mid-June, the European Parliament approved draft rules for the world’s first law to regulate the technology. The upcoming AI Act, which likely won’t be implemented for another two years, will regulate AI technologies according to their level of perceived risk to society. The draft text bans facial recognition technology in public spaces, requires generative language models like ChatGPT to disclose any copyrighted material used to train them, and requires AI-generated content to be labeled as such.

Advocates hope that the upcoming law is only the first step to holding companies accountable for AI’s harms. “These things are causing problems now,” said Rick Claypool, research director for Public Citizen. “And why they’re causing problems now is because of the way they are being used by humans to further human agendas.”