To pause or to push on: The AI dilemma that will shape our digital future


Written by Professor Alan Brown, Professor in Digital Economy, Exeter Business School

Sometimes it feels like the world turns very slowly. Then, suddenly there is a major shift that has immediate impact and changes our view of the world in fundamental ways. For many people, that is what we have experienced over the past few months.

With the release of OpenAI’s ChatGPT in November 2022, swiftly followed by Google’s Bard and others, a distant view of an AI-driven world suddenly seems much, much closer. Based on Large Language Models (LLMs) such as GPT-4 with billions of parameters, these readily accessible tools have sparked a flurry of applications that use neural networks to generate text and images for a wide range of situations. Supported by an easy-to-use interface, well-crafted APIs, and a low (or free) cost model, it is not surprising that ChatGPT attracted a million users in only 5 days. Consequently, it is impossible to visit a news website, open a journal, or attend a conference without hearing about yet another way that ChatGPT will change the way we work today. Though the tool is only a few months old, a Statista survey estimates that over 40% of the US adult population is already aware of ChatGPT.

But perhaps more importantly, it feels like we’ve reached a critical point in the digital transformation journey being pursued by many organizations. Beyond the usual focus on introducing new technology to digitize current practice, the discussions have turned away from a scorecard of applications that will be disrupted by this new wave of AI-based solutions, and toward a deeper conversation about the implications and impact this will have on how we see our digital future.

As these discussions develop, questions are surfacing that challenge the current readiness of governance and legal frameworks for AI, dispute the dominant role of Big Tech in controlling key elements of AI, and question the breakneck pace at which AI is disrupting business and society. We are undoubtedly at a crucial point where concerns about “how” to deliver AI are being subsumed by the need to ask “why” and “when”.


Slow, slow, quick, quick, slow

Nowhere is this seen more clearly than in the recent calls for a slowdown in the application of AI to allow time for reflection. In an open letter published at the end of March 2023 and signed by over 25,000 people (including several big names such as Elon Musk and Steve Wozniak), the signatories requested “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. They decry the lack of planning and management behind the launch of products such as ChatGPT, and declare that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable”.

What is the basis for this request? Is it desirable or achievable? In essence, while some people welcome the recent advances in AI as “the end of the beginning” phase of digital transformation, those behind the call for an AI pause are more inclined to warn that it may well be “the beginning of the end” for humanity without appropriate guardrails in place. They worry that we are entering a time when the ideals and safeguards for AI are being trampled in a headlong rush toward exploitation of human rights, excessive commercialization, and the realization of what Shoshana Zuboff refers to as “surveillance capitalism”.

The case for and against this slowdown has become an important litmus test for expanding our understanding of digital transformation and its importance to our future. Hence, it is essential to spend a moment to review the main arguments on each side of this debate.


Why we need to pause

TL;DR – Recent AI breakthroughs have overwhelmed organizations and institutions. Fair and equitable use of AI should be seen as a right available to all. Without a pause to reflect, we are exposed to fundamental weaknesses in governance, legal, and ethical frameworks that must be addressed now.

Perhaps just as significant as the hopes it raised, the announcement of ChatGPT reminded many people that the rapid availability of such tools will force organizations and individuals to address challenges that they may well be ill-prepared to meet. Beyond automation and intelligent decision making, LLM-based AI systems are capable of generating vast amounts of information that is not only indistinguishable from human-generated material but often intended to fool people into believing it comes from a human source. Furthermore, the intelligence used by such systems is limited in scope, often unverified, and subject to manipulation.

Consider, for example, the implications of a sophisticated AI tool that has no concept of right or wrong. Ask it a question and it responds with an answer that is plausible and believable. However, that answer may also be incomplete, misleading, or simply false. Those already making use of ChatGPT report that its responses are “dangerously creative”. That is, its creativity knows no bounds, and it sets no limits on whether its answers are true or false.

This can have disturbing results. Cassie Kozyrkov, Chief Decision Scientist at Google, calls ChatGPT “the ultimate bullshitter”. It provides seemingly correct answers to anything and everything, but with no filter on what it says and no way of determining what is true and what is not. It is dangerous precisely because it has no interest in ensuring the validity of its responses.

Furthermore, ChatGPT is widely available at zero cost. This makes it very attractive across many domains; so much so that over one million users signed up in less than a week. Its potential uses appear never-ending. But they also bring with them some troubling questions.

Imagine the implications if every student writing an essay can use ChatGPT to generate the text. Every company producing software can deploy ChatGPT to create its code. Every social media channel is clogged with responses created by ChatGPT. And so on. What will this do to many of our knowledge-based professions? What are the implications for intellectual property and liability in a world where we cannot distinguish how information is generated? How will we evaluate the value and validity of AI-generated responses? Will the wide availability of AI-generated responses destabilize many of our existing systems? These and many other questions are left hanging in the air.

It is with this in mind that Paul Kedrosky, an economist and MIT fellow, refers to ChatGPT as “a virus that has been released into the wild”. He believes that most organizations are completely unprepared for its impact, and sees its broad release without restrictions as reckless: a Pandora’s box that should not have been opened. With the release of ChatGPT we are now beginning to realize just how much there still is to debate about the future of our digital world.

The latest 2023 AI Index from Stanford University takes this argument one step further. It raises the concern that decisions about how to deploy the latest AI technology, and how to mitigate its risks, are in the hands of a few Big Tech companies. Yet, even as their influence grows, these companies have been seen to cut their AI safety and ethics teams. Several leading figures in AI have highlighted the challenges this brings to AI’s future and have called for more focus and investment in governing the use of AI.

In such circumstances, those requesting a pause see the unmanaged release of ever more sophisticated AI systems as irresponsible at best, and dangerous at worst. Cooperative agreements and increased focus are required to reduce potential AI harms that are already having significant negative impacts, but may well soon be out of control.


Why we need to press on

TL;DR – The AI genie is out of the bottle and no one can put it back. To attempt to do so would not only delay technological advances that could be of broad benefit, it would also upset the geopolitical balance among countries and institutions with very different visions of our digital future.

In some people’s eyes, we’re in the middle of a global race for AI supremacy, and at the moment China may well be in a dominant position. At least two elements appear to be critical: the availability of large quantities of high-speed processors (chips), and the building of high-quality, large data sets and models to train AI.

As the world becomes more deeply engaged in the adoption of digital technologies, attention has turned to who produces the most advanced chips at the core of the systems that are now essential to how we function, and to AI advances in commerce, infrastructure, defence, government, and so on. Naturally, the superpowers of the USA and China are at the forefront of these concerns and have placed technology investments at the top of their agendas. So much so that President Xi Jinping recently declared that “Technological innovation has become the main battleground of the global playing field, and competition for tech dominance will grow unprecedentedly fierce.” This prediction is undoubtedly playing out today.

In addition, it is widely reported that China is applying AI technology very broadly across multiple domains in a race to reboot its economy and control its more than 1.4 billion citizens. Perhaps most prominent has been the use of AI to power wide-scale surveillance within and outside of China. Using a variety of AI tools and techniques, Chinese companies and government agencies are collecting information, sharing data across different organizations, and identifying individuals in different contexts. While much of the focus of these approaches is internal to China, supporting its goal of maintaining governance over its citizens, the adaptation of these capabilities for commercial and political benefit has been highlighted in several reports.

However, China is not alone in understanding the importance of leadership in AI, and more broadly in controlling our digital future. The struggle for dominance of our digital infrastructure is the theme at the heart of a recent book by Dame Wendy Hall and Kieron O’Hara. They describe developments in AI and at the core of the internet as alternative visions of a future digital society, embodied in the values, structures, and operating models of four distinct versions of our digital future:

  • The Silicon Valley Open Internet is the original concept designed by engineers and scientists as a digital world that is free for all: supporting the sharing of knowledge, providing open access, and remaining committed to net neutrality.
  • The Brussels Bourgeois Internet is the view of a well-managed society that is governed by regulations to ensure fair play, make sure everyone follows the rules, and eliminate bias.
  • The DC Commercial Internet is the market-driven infrastructure that views property rights and commercial interests as fundamental to encouraging the competition that drives rapid technology development and innovation.
  • The Beijing Paternal Internet is a controlled environment where the broader interests of the state determine what is accessible and available to citizens to contain unacceptable behaviour.

What we witness today is a geopolitical struggle for our digital future as these four visions jostle for dominance. The past few weeks have highlighted that AI supremacy may well be the key. As Hall and O’Hara describe it, the importance of staying ahead in the understanding and use of digital solutions transcends concerns about technology and physical devices. What is at play here is the role of digitally-powered infrastructure in determining core aspects of freedom, innovation, security, and human rights. Understanding key elements of this digital infrastructure is essential to appreciating why the battle for control of the internet is at the heart of today’s political discussions. How this is resolved in the coming years will have implications for all of us.

In essence, those rejecting a call for a pause see developments in AI as essential steps in the digital transformation that is affecting all of society. Apocalyptic visions of the future are overblown and unhelpful. Furthermore, organizations such as the US military establishment view a call for an AI pause as a “well-meaning but futile” attempt to interfere in an “AI arms race” that will affect us all.


Between a rock and a hard place

Recent advances in AI have created a great deal of excitement and woken many people up to the opportunities it brings across many domains. However, the speed of its adoption is also a cause for concern. A recent call for a six-month pause on the development of more advanced AI systems has sparked a great deal of debate. Are we running ahead too fast with AI? Or should we accept that moving forward is essential when AI is such a key part of how we will create our digital world? Understanding both sides of this debate is critical. Spend some time considering the key arguments. The outcome may well play an important part in everyone’s future.


Originally posted here

