If the industrial revolution of the 18th century was defined by the transition to new manufacturing processes, there’s little doubt that the 21st century will be looked back upon as the era of the technological revolution.
There is ample emerging evidence that this new revolution, built on data scraping and AI, will cause huge upheavals at a time of significant political and economic turmoil.
That is not, as the media often reports, because of the technology itself, but because the people and companies who wield it may not be thinking about the broader societal implications.
I’m less concerned with the industries ‘consuming’ AI to process, automate and create efficiencies for their customers – banks, healthcare providers and so on. I am far more worried about the major suppliers of AI, such as Google, Amazon, Facebook, Microsoft, IBM and Baidu.
Many of these companies already live under the delusion of indispensability and are creating economic havoc in their fight to build (and buy) strong AI.
I don’t believe these companies will give birth to the seemingly inexorable march of evil robots, or the singularity. I’m suggesting that significant problems will arise from the monopoly of a few global technology giants hoovering up talent, collecting our data, and buying new AI startups in order to own and harness the power of AI, for their growth and not the good of humanity.
When only a handful of companies controls the majority of strong AI, it will be imperative that they are compelled to behave transparently and ethically, to prevent the manipulation and corruption of the public. Will the leaders of those companies take an ethical stance for the greater good of all, or bend to capitalist whims and shareholder pressure?
Global governments could regulate them, and strong AI could be limited, curtailed, restricted, controlled or even stopped. But many of those governments would also prefer to wield this power themselves, so this is unlikely to happen.
A better alternative would be for the public to rise up and choose who they cede power to. If people could see how the data they supply to companies is being used to fuel AI, they would think twice about handing it over.
A global charter for AI could, in theory, create transparency, but only if companies recognise it, and the public is educated enough to understand the threats and get behind it.
If companies prove that they are ethical by design, we could be safe. If they are not, better alternatives will be empowered to rise in their place.
I see the characteristics of an ethical technology company as follows:
It’s a utopian view. But not unfeasible.
Let me leave you with this exciting thought. Just as George Stephenson built his first steam-powered locomotive in 1814, ushering in machines that changed the world for good, perhaps part of the public demand could be to challenge technology companies to solve the fundamental humanitarian issues that still plague the world today.
What if we, the people, chose to use a product or service, and seeded tech companies with our data in exchange for conscious capitalism, in much the same way that the Fairtrade stamp allows people to choose which coffee they buy?
What if AI were used to increase efficiency and augmentation for, say, charities, which in turn could help vulnerable people looking for human support in underfunded spaces? Think of the change.
I for one would be more willing to accept that a company held so much computational power if I could also see that it was using the same technology to make the future for my children more balanced, fairer, better supported, and free from harm.
Pete is a speaker at DigitalAgenda’s Power & Responsibility Summit, taking place at London’s British Library on Thursday 4 October, where more information, including how to secure your ticket, is available.