The hazards of over-reliance on AI technology

Written by Patrick Foster, Ecommerce Consultant at Ecommerce Tips.org

With each passing month, we edge closer to the kind of future that has long been theorised but just as long been thought implausible: a future of automation steering almost every element of our daily lives, guiding our decisions and determining the parameters that bind us. Some welcome it, ready to relinquish control to systems incapable of human error, while others fear it.

On balance, each side has reasonable points to make, though we need to weigh those of the latter carefully if we're to reach a point of comfort with the former. However significant the advantages of AI (and they are astonishing), the risks of getting the implementation wrong as we cede responsibility are catastrophic.

Here, I’m going to run through some of the major hazards of relying too heavily on AI technology, and consider how we can address them as we continue to push for automation.

We can’t troubleshoot systems we don’t understand

Computer systems may not be able to make human errors, but as products of human design, they inevitably embody human errors to some extent. Consequently, problems arise, and those problems must be analysed by people to ensure that they're properly resolved.
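
To make this concrete, consider a minimal toy sketch in Python of how a human error can hide inside code that reads as correct: binary floating point cannot represent most decimal fractions exactly, so a seemingly sensible rounding function quietly misrounds certain prices.

    from decimal import Decimal, ROUND_HALF_UP

    def round_price(amount: float) -> float:
        """Round a price to two decimal places. Looks correct, but isn't always."""
        return round(amount, 2)

    # 2.675 is stored as roughly 2.67499999999999982..., so the result is
    # faithful to the bits yet wrong for the human intent.
    print(round_price(2.675))  # prints 2.67, not the 2.68 a person expects

    # One remedy is to model money with decimal rather than binary arithmetic:
    print(Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 2.68

The bug survives casual review precisely because the code matches its author's mental model; it takes a person analysing the failure to see where that model and the machine diverge.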

In principle, a computer system could serve as a troubleshooter, but could we be certain that such a system contained no bugs of its own? To be sophisticated enough to handle that task, it would need to be massively complex, meaning it would take a huge amount of time to dissect.

And the more immense and multi-layered AI systems become, the harder it is for any person to comprehend them as a totality. One day, we're likely to reach the point where self-improving systems become near-unrecognisable to our eyes, and then we'll have no choice but to trust them or attempt to shut them down.

Our brains, and our capacity for cognition, are limited. We can get smarter, learn new things, and grasp more advanced concepts, but we can't process data any faster, or deal with large systems without breaking them down into understandable chunks. Unless transhumanism becomes a mainstream reality, this is a problem we cannot avoid.

We have yet to adequately confront ethical concerns

Several disasters in recent years have set back efforts to hasten the advent of self-driving cars and comparable automated transport, and in doing so have highlighted how confused we are when trying to allocate culpability for things that go wrong in situations governed by AI technology.

When a human driver crashes a car and kills or seriously injures someone, it incites fury and outrage, but of a familiar kind. Those involved can blame the driver and hold them accountable for their suffering. They can see the driver charged and punished under the law, and draw some measure of closure from the retribution.

When an algorithm leads to a death, though, whom are we to blame? The anonymous programmer? The physical server? And what happens if the onboard data shows clearly that the death was actually the preferable scenario, and had any other action been taken, several people would have died instead of just one?

We can barely figure out how to handle ethical concerns when people are involved, so it’s hard to see how we could handle a world in which vengeance no longer existed in the same way. Of course, we may have to, sooner or later.

The world of employment is fundamentally delicate

Humans are adaptable when properly pushed, but they can also give up easily when circumstances give them no reason to expect a brighter future. This has been an issue ever since the Industrial Revolution. If you've committed years of your life to mastering a skill, and a company then unveils a computer system capable of doing the same work a hundred times more efficiently, it's understandable to feel hopeless.

And make no mistake: AI is far from done eating into jobs. What happens to the workforce of drivers when self-driving cars overcome the ethical roadblock and become standard? Or to coders when a system capable of replacing a web developer is created? Those of us in creative roles hope that our skills will retain value for longer, but even that is somewhat speculative.

Some may want to simply write this off as an unavoidable consequence of progress, reasoning that the workforce will eventually get used to the new paradigm. Younger generations, savvy with technology and accustomed to flexibility, will flourish — they’ll learn to build successful brands using boundless online options, trade businesses back and forth, or, if needed, flock towards those jobs that AI will create (every system will need oversight, as we established).

However, I contend that this underestimates the danger. If AI goes too far too quickly, and enough people suddenly find themselves without marketable skills, protests and riots may be just the beginning. It's a worrying thought. We need to look as far ahead as we can and know what we're getting into.

Idle hands often produce dissatisfaction

Let’s suppose that all of the issues we’ve looked at are suitably addressed somehow: we’re able to keep AI systems properly maintained, we learn to deal with ethical concerns, and we automate roles into obsolescence slowly enough that people adapt.

What comes after that? Do we bask in a glorious new utopia of freedom and relaxation? Precedent makes that notion questionable. Despite living standards across the world being higher today than ever before, depression is rampant, and we have good reason to think that lacking a clear purpose in life is a major contributing factor.

It isn't ideal for people to have their roles in life determined by their circumstances, of course, and I don't think many would sincerely advocate a return to times of conscription and highly restricted lives. But it may be that we're simply not ready to thrive in a world driven by AI technology.

Given that we evolved as hunter-gatherers, we benefit from routine, expectation, and challenge. Left to their own devices, with no pressing drive to work, earn, and be productive, some people will create their own purpose, but many more will be left unmoored. Is that a society we want to embrace so casually?

In striking a negative tone here, I don’t mean to rail against AI innovations — used well, they can (and will) accomplish wonders. Rather, by frankly addressing the major causes for concern, I hope to encourage everyone to carefully consider them and start coming up with viable solutions.

