Let’s talk about open-source AI and public policy

Written by Maria Luciana Axente, Head of AI Public Policy and Ethics, PwC UK

Lately, it feels like you can’t scroll through LinkedIn without stumbling on a fresh debate about open-source AI. Is it a solution to the challenges of AI governance, or a ticking time bomb? You’ll even find dedicated chapters on open-source AI in policy reports, such as Demos’ Open-Sourcing the AI Revolution, a paper to which PwC UK contributed, which explores the pros and cons of open-source AI in today’s regulatory environment. It’s a hot topic for good reason.

I get it. There’s a sense of optimism in the air—perhaps too much. The idea that anyone, anywhere can access the most advanced AI models has a certain charm. You can picture the dream: start-ups competing against tech giants, academia flourishing with innovation, non-profits wielding AI to tackle climate change. But as we delve deeper into this movement, we face a pivotal question: Can we handle it?

 

Utopia or Dystopia? Can it be both?

Imagine a world where AI is open for all. Well, no need to imagine too hard—we’re pretty much there. Meta’s Llama models, for instance, have been downloaded over 400 million times, according to the Financial Times. AI models, open-source or not, are fueling innovation at an unprecedented pace. In theory, this democratization levels the playing field. But there’s a catch: “Technology is neither good nor bad; it’s how we use it.” The reality is, not everyone uses it for good.

During a recent conversation with a developer at an AI governance conference, I was asked: Why wouldn’t we want everyone to benefit from AI? With a grin, I answered: “Because we’ve all seen Jurassic Park.” We know what happens when control goes out the window, and AI is no different. It’s a classic case of “just because we can, doesn’t mean we should.”

 

What is this thing called Open Source AI?

One of the most significant recent strides in this field comes from the Open Source Initiative (OSI), which has unveiled its first Open Source AI Definition (OSAID). This groundbreaking framework, developed through extensive collaboration and feedback from global workshops, represents years of effort to provide a clear and practical approach to defining open-source AI.

The OSI—widely acknowledged as a leading authority on open-source standards—insists that open-source AI should come with four essential freedoms:

1. Use for Any Purpose – Individuals can use the system freely, without having to ask for permission.

2. Study the System – Users must have access to see how the system works and inspect its components.

3. Modify as Needed – People should be able to modify the system to suit their purposes.

4. Share Freely – The modified or original version can be shared freely with others.

These freedoms are vital to ensuring that AI remains transparent and user-driven. According to Carlo Piana, OSI’s board chair, “The co-design process that led to version 1.0 of the Open Source AI Definition was well-developed, thorough, inclusive, and fair.” Piana believes that OSAID delivers on these foundational freedoms and meets the standard set by OSI’s broader Open Source Definition.

 

The peculiar case of a Llama


Speaking of “open” AI, Meta’s Llama models have been a breath of fresh air for many developers, offering an alternative to the “black box” models from the likes of OpenAI and Google. Yet the OSI does not agree that Llama is truly “open-source.” According to Stefano Maffulli, who leads the OSI, there is concern that the term “open-source” is being stretched. In an interview with the Financial Times, Maffulli noted that this looser use of the term, as seen with Meta’s Llama models, risks confusing what “open-source” traditionally means.

While Meta provides developers access to download Llama models, some components, such as the training algorithms, remain closed, which raises questions about transparency. Maffulli highlighted the importance of maintaining clear definitions in the AI space to ensure that openness and trust continue to underpin the principles of open-source AI.
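
To make the “open weights, closed everything else” point concrete, here is a minimal sketch of how a developer typically downloads and runs a Llama model. It assumes the Hugging Face transformers library and uses the model ID meta-llama/Llama-3.1-8B purely as an illustration (access to these weights requires accepting Meta’s licence terms first). Nothing in this workflow exposes the training code or data behind the model, which is precisely the gap OSI points to.

```python
# A sketch of what "open" means for Llama in practice: the trained
# weights can be downloaded and run locally, but the training
# pipeline and data behind them are not published.
# Assumes the Hugging Face transformers library; the model ID below
# is illustrative and gated behind Meta's licence terms.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"

# Downloading the weights is gated by Meta's community licence,
# not an OSI-approved open-source licence.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# You can run (and fine-tune) the model freely within that licence...
prompt = "Open-source AI means"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# ...but you cannot inspect or reproduce how it was trained: the
# training code and dataset are not part of the release, which is
# why OSI argues Llama falls short of "open source".
```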

 

The double-edged sword

Open-source AI can accelerate innovation and break down barriers, but there’s a darker side too. The recent Demos report, Open-Sourcing the AI Revolution, highlights this dilemma clearly. As the paper outlines, the open-source movement historically argued that “making the code available makes software transparent and therefore safer.” However, open-source AI presents unique risks. The Demos paper notes that people can use open AI models to create new ones without guardrails—potentially amplifying cybercriminal activity or spreading misinformation.

Take WormGPT, for example: a generative model specifically built to assist cybercriminals. Advocates of open-source AI often point out that even closed systems are vulnerable, but this doesn’t erase the risk. Criminals, too, can leverage open-source models to wreak havoc.

 

The path to control with regulation

Of course, no discussion on this topic would be complete without addressing the “control” factor. Can we really leave AI governance to the companies creating it? Meta’s case is a prime example of why that’s not always the best idea. Its claim to be “committed to open-source AI” rings hollow when, in practice, its licensing model limits competition and transparency. As Maffulli stated, Meta risks hampering the development of true, user-driven AI innovation.

It’s clear we need guardrails. Governments will need to step in, as they have done in other industries. As outlined in the Demos paper, there’s a growing consensus that “open-source AI is not without risks.” While companies should drive innovation, policymakers must ensure that AI’s deployment is safe, transparent, and equitable. As the report emphasizes, we may need to rethink how open-source AI fits into regulatory frameworks, and how we balance innovation with control.

And that’s really what it comes down to: balance. We can enjoy the benefits of open-source AI while being cautious about how it’s used and who gets to use it. If we want AI to work for everyone—and not just a select few—we must ensure it’s built on foundations of responsibility, fairness, and most importantly, safety.

 

So, where do we go from here?

Well, in true AI fashion, we need a mix of data-driven insights, regulatory foresight, and, let’s not forget, a healthy dose of human common sense. Open-source AI has the potential to be the key to a brighter, more equitable future. But only if we get it right. And that’s a challenge worth tackling. After all, as I always say: “AI is a tool, but it’s our responsibility to decide how we use it.” First things first: convene the experts for a series of pre-policy consultations. We have done this in the past, and it works well for everyone, but most importantly for those who need to take action.


