Why AI Struggles to Understand Renewable Energy Opposition
Source Publication: Springer Science and Business Media LLC
Primary Authors: Chaudhuri, Chun, Susskind

Imagine you hire a very eager, very fast intern to read thousands of restaurant reviews. If you ask them, "Did the customer mention the soup?" they get it right every time.
But if you ask, "Was the customer upset about the chef's attitude or just the slow service?" the intern gets confused and starts guessing.
According to preliminary research, this is exactly what happens when scientists use artificial intelligence to study renewable energy opposition.
The pushback problem
To meet climate targets, nations must build wind and solar farms at an incredible speed. Yet, local pushback often slows these projects to a crawl.
Understanding the root causes of this friction is essential for planners. However, reading through thousands of local news articles and forum posts takes years of human effort.
Researchers recently attempted to speed up this process using Large Language Models (LLMs). In an early-stage study, scientists tested whether AI could accurately map public sentiment at massive scale.
Testing AI on renewable energy opposition
The research team fed their AI system online media coverage of 5,095 operational wind and solar projects across the United States. Although the dataset was limited to these US installations, the scale was far beyond what human coders could manage: the AI was asked to score each project on the types of complaints raised, the drivers of anger, and basic project details.
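To make that workflow concrete, here is a minimal sketch of what such a scoring pipeline might look like. The study does not publish its prompts or name its model, so the `call_llm` helper, the field names, and the prompt wording below are all illustrative assumptions, not the authors' actual method.

```python
import json

# Hypothetical helper: sends a prompt to whatever LLM provider you use
# and returns its text reply. Stands in for any chat-completion API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

# Illustrative schema, loosely based on the kinds of questions the
# article describes (complaint types, drivers of anger, project details).
PROMPT_TEMPLATE = """You are coding news coverage of a renewable energy project.
Read the article below and answer in JSON with these keys:
  "opposition_mentioned": true or false   (observable fact)
  "project_type": "wind" or "solar"       (observable fact)
  "complaint_types": list of short labels (interpretive)
  "anger_drivers": list of short labels   (interpretive)

Article:
{article}
"""

def score_article(article_text: str) -> dict:
    """Ask the LLM to code one article into structured fields."""
    reply = call_llm(PROMPT_TEMPLATE.format(article=article_text))
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        # LLMs sometimes return malformed JSON; flag the article for
        # human review instead of silently guessing.
        return {"parse_error": True, "raw_reply": reply}
```

Note the split built into the schema itself: some fields are observable facts, others require interpretation. That distinction turns out to matter, as the results below show.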
The study measured the AI's accuracy against careful human readers, and the early-stage results show a stark split in performance.
On clearly defined, observable facts, the AI was excellent, hitting over 95 per cent accuracy. But when asked to interpret context or synthesise narratives, its accuracy dropped below 85 per cent.
The digital megaphone effect
The AI analysis did spot some broad trends. Roughly half of the projects analysed had documented opposition in online media. The size of the project and the type of technology clearly affected the level of pushback.
But the researchers noticed a major flaw in the data itself. The demographics of the opposition seemed to reflect who was loudest online, rather than what the whole community actually felt.
This creates a severe bias. If scientists rely entirely on AI scraping the web, they only hear from people with high digital visibility. Quiet concerns from marginalised groups simply vanish from the record.
Slow down to speed up
What does this mean for the future of climate research?
The findings suggest that AI cannot just be plugged in and left to run. Relying on an algorithm to answer complex, value-laden questions demands sustained human oversight. To use these tools responsibly, the researchers suggest several guardrails:
- Invest heavy upfront time to clearly define what the AI should look for.
- Manually validate the AI's results against human reading (a sketch of this check follows the list).
- Acknowledge the bias of digital visibility in online data.
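Here is what the second guardrail might look like in practice: a minimal sketch that scores AI output against a human-coded gold set, question by question, so that objective and interpretive questions can be audited separately. The field names and labels are invented for illustration, not taken from the study's data.

```python
from collections import defaultdict

def accuracy_by_question(ai_labels, human_labels):
    """
    Compare AI answers to human-coded answers, broken out per question.

    ai_labels / human_labels: parallel lists of dicts, one per article,
    mapping question names to answers. Returns the fraction of exact
    matches for each question.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for ai, human in zip(ai_labels, human_labels):
        for question, truth in human.items():
            totals[question] += 1
            if ai.get(question) == truth:
                hits[question] += 1
    return {q: hits[q] / totals[q] for q in totals}

# Tiny illustrative run (made-up labels, not study data):
ai = [{"opposition_mentioned": True, "anger_drivers": "noise"},
      {"opposition_mentioned": False, "anger_drivers": "views"}]
gold = [{"opposition_mentioned": True, "anger_drivers": "property values"},
        {"opposition_mentioned": False, "anger_drivers": "views"}]
print(accuracy_by_question(ai, gold))
# -> {'opposition_mentioned': 1.0, 'anger_drivers': 0.5}
```

Reporting accuracy per question, rather than one blended number, is exactly what exposes the split the study found: near-perfect scores on factual fields can hide much weaker performance on interpretive ones.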
Ultimately, effective use of AI in social science might actually take much longer than anyone anticipated. Algorithms can read the internet, but they still cannot read the room.