As someone who has spent years building technology that serves real people in real places, I see AI as both a powerful tool and a growing responsibility. Its potential is extraordinary. It can help organisations reach more people, understand communities more deeply and operate with far greater efficiency. But beneath that promise sits an environmental cost that is still overlooked across most industries.
My concern isn’t rooted in pessimism. It comes from proximity. I see how quickly AI adoption is accelerating, and I see how little visibility many organisations have into the emissions behind their digital choices. If we want AI to support a more sustainable future, we need a clearer view of the hidden costs and a more intentional approach to how we deploy it.
AI feels effortless on the surface. One prompt, one answer. Yet every interaction relies on servers running at massive scale. Training a single large model can emit as much carbon as several cars produce over their entire lifetimes. Then we layer daily usage on top, across thousands of tools and millions of queries. The cumulative impact rises fast.
What makes this more challenging is the complexity of AI supply chains. Between cloud providers, model hosts and downstream applications, it is often unclear where energy is used, how efficiently systems operate or whether models are retrained more often than necessary. Without measurement, organisations can increase their carbon footprint significantly without realising it.
I don’t see this as a reason to slow innovation. It is a reason to build with awareness.
One of the most common issues I observe is overuse. When AI becomes the default response to every problem, emissions increase unnecessarily. Many tasks can be handled more efficiently with simpler software or with small domain-specific models that use far less compute.
At Hello Lamp Post, this has shaped our own design philosophy. We use AI where it genuinely improves outcomes, not where it simply feels convenient. Hybrid systems often do the job better and with a fraction of the environmental impact. Asking “Do we actually need AI for this?” is one of the most useful questions any organisation can embed in its workflow.
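To make that question concrete, here is a minimal sketch of the hybrid idea: handle simple, well-understood queries with cheap rule-based logic and reserve the model call for everything else. All names here are hypothetical, the canned answers are invented for illustration, and `call_llm` is a placeholder for a real hosted-model call.

```python
# Illustrative only: route cheap queries to a lookup table and use a
# large model only as a fallback. Every name and answer is hypothetical.

FAQ_ANSWERS = {
    "opening hours": "The community hub is open 9am to 5pm, Monday to Friday.",
    "contact": "You can reach the team at the front desk or via the website.",
}

def call_llm(query: str) -> str:
    # Placeholder for a real model call (e.g. a hosted API) -- the
    # expensive, energy-intensive path we want to use sparingly.
    return f"[LLM response to: {query}]"

def answer(query: str) -> str:
    """Answer known queries locally; fall back to the model otherwise."""
    lowered = query.lower()
    for keyword, canned in FAQ_ANSWERS.items():
        if keyword in lowered:
            return canned  # no model call, near-zero compute
    return call_llm(query)
```

Even a routing layer this crude can keep a large share of routine traffic off the heavy path; the real design question is where to draw the line between the two.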
Another overlooked factor is geography. When data travels across continents, its energy footprint rises. Hosting models and data closer to where they are used reduces unnecessary load and improves performance at the same time.
Power sources matter even more. Infrastructure supported by renewable energy dramatically reduces carbon impact. This is not a theoretical improvement. It is a practical and immediate one. For our own systems, renewable-powered hosting is a baseline requirement rather than an optional feature.
Reducing prompt size also has a surprisingly meaningful effect. Overly long inputs demand more processing power, especially at scale. Smarter routing, vectorisation and more efficient indexing can cut this overhead without reducing quality.
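A rough sketch of that prompt-budgeting idea: keep only the highest-ranked context chunks that fit within a token budget before sending anything to the model. This is an assumption-laden illustration, not our production code; token counts are approximated as whitespace-separated words, whereas a real system would use the model's own tokenizer.

```python
# Sketch: cap prompt size by keeping only the best-ranked context chunks
# within a token budget. Word counts stand in for real tokenizer counts.

def approx_tokens(text: str) -> int:
    # Crude approximation: one whitespace-separated word per token.
    return len(text.split())

def build_prompt(question: str, ranked_chunks: list[str], budget: int = 200) -> str:
    """Assemble a prompt from pre-ranked chunks without exceeding the budget."""
    used = approx_tokens(question)
    kept = []
    for chunk in ranked_chunks:  # assumed already ranked by relevance
        cost = approx_tokens(chunk)
        if used + cost > budget:
            break  # stop before the prompt grows past the budget
        kept.append(chunk)
        used += cost
    return "\n\n".join(kept + [question])
```

The budget itself becomes a tunable lever: lowering it trades a little context for a measurable cut in compute per query, which is exactly the kind of deliberate choice this section argues for.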
These may sound like small adjustments, but at population level they add up to real reductions.
I am convinced we will see a strong move toward smaller, sector-specific models. Large general models are useful for broad tasks, but they are heavy, expensive and energy-intensive. Specialist models, trained on focused data, are faster, cheaper to run and better suited to real-world operational needs.
For many public-sector and place-based use cases, these dedicated models are not just desirable. They are the future. They deliver more accurate results while keeping energy use under control, and they allow organisations to make decisions with clearer insight into environmental impact.
As AI adoption grows, accountability must grow with it. Organisations will need to assess not only their own practices but those of every supplier in their chain. Carbon reporting, renewable-powered infrastructure and transparent training processes will become standard expectations.
This shift will mirror what happened in cloud computing. For years, providers operated without pressure to disclose environmental impact. Eventually, customers began demanding clarity. Standards emerged. Today, renewable-powered cloud services are a mainstream expectation rather than a niche offer.
AI will follow the same path, but faster.
Despite the challenges, I believe AI can create a net positive environmental impact. When used well, it reduces travel, improves decision-making and expands access to services without physical expansion. In our work with partners, AI has helped eliminate unnecessary journeys, cut the need for printed materials and enable staff to focus their time where it matters most.
It can also amplify public participation. When people have easy access to information and a simple way to share their views, more voices are heard. That leads to better decisions and more sustainable outcomes.
The point is not to avoid AI. It is to use it with intention.
Responsible AI isn’t defined by a single decision. It is the result of many small choices made consistently over time: choosing efficient models, reducing unnecessary prompts, hosting locally, demanding clarity from suppliers and measuring impact with honesty.
If we take these steps now, we avoid far more costly interventions later. We protect climate commitments, we reduce risk and we build systems that align with the world we want to create.
AI can support smarter, greener communities. But only if we design it that way.
To explore these ideas further, I recently discussed them during AI Week 2025. You can watch the full session here: https://aiexpert.digileaders.com/talks/ais-carbon-footprint-how-to-harness-ai-responsibly/