Advanced AI models generate up to 50 times more CO₂ emissions than more common LLMs when answering the same questions
By Ben Turner published 2 days ago
Asking AI reasoning models questions in areas such as algebra or philosophy caused carbon dioxide emissions to spike significantly.
The more accurate we try to make AI models, the bigger their carbon footprint — with some prompts producing up to 50 times more carbon dioxide emissions than others, a new study has revealed.
Reasoning models, such as Anthropic's Claude, OpenAI's o3 and DeepSeek's R1, are specialized large language models (LLMs) that dedicate more time and computing power to produce more accurate responses than their predecessors.
Yet, aside from some impressive results, these models have been shown to face severe limitations in their ability to crack complex problems. Now, a team of researchers has highlighted another constraint on the models' performance: their exorbitant carbon footprint. They published their findings June 19 in the journal Frontiers in Communication.
"The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," study first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences in Germany, said in a statement. "We found that reasoning-enabled models produced up to 50 times more CO₂ emissions than concise response models."
Snip...
https://www.livescience.com/technology/artificial-intelligence/advanced-ai-reasoning-models-o3-r1-generate-up-to-50-times-more-co2-emissions-than-more-common-llms