AI snake oil
December 2024
In recent years, AI has faced mounting criticism over its role in perpetuating bias and discrimination, raising ethical concerns, and enabling privacy violations. However, I find that this narrative too often misplaces responsibility and obscures a crucial truth: AI systems are not creating these problems but rather exposing deep-seated issues that have pervaded our digital landscape for decades.
There is no doubt that the rapid advance in AI technology and use is placing pressure on our legal, moral, political, and societal norms. A brief review of resources such as the AIAAIC repository, which details incidents and controversies driven by AI, algorithms, and automation, reveals plenty of examples that raise concern about the direction we’re taking with AI and how it is being exploited by Big Tech, governments, and many other organizations.
Yet, as we deploy AI at scale, we’re finding that AI acts as a mirror, reflecting the systemic challenges that have long existed in our approach to technology development and deployment. And we don’t always like what we see. To understand the roots of this dilemma, consider the challenges AI has surfaced in four areas: bias, ethics, discrimination, and privacy.
The issue of AI bias has garnered significant attention, with numerous cases of systems producing biased outcomes. However, these biases primarily stem from historical data and inappropriate data capture practices rather than being inherent in the AI systems themselves. Consider the widely discussed case of Amazon’s experimental hiring algorithm, which exhibited bias against women. The system learned from a decade of historical hiring data, data that reflected the tech industry’s male-dominated hiring trends and misguided recruitment approaches. The AI didn’t create this bias; it simply exposed the systematic underrepresentation of women in tech hiring practices.
Such revelations have forced us to confront uncomfortable truths about our data collection and utilization practices. For decades, organizations have amassed vast data repositories without adequate consideration for representativeness or inherent biases. AI systems, by learning from this data, make these pre-existing biases more visible and quantifiable. This transparency, while uncomfortable, provides an unprecedented opportunity to address these systemic issues at their source.
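To make “visible and quantifiable” concrete, here is a minimal sketch using an invented toy dataset and hypothetical column names. It computes one common check, the disparate impact ratio, and shows that a model trained on biased historical decisions simply reproduces them. It is an illustration, not a recommended audit procedure.

```python
# A minimal, hypothetical sketch: quantifying bias already present in
# historical hiring data, and showing a model trained on it learns the same gap.
# The data and column names are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy "historical" data: group A was hired at a lower bar than group B.
data = pd.DataFrame({
    "group": ["A"] * 50 + ["B"] * 50,
    "score": list(range(50)) * 2,
    "hired": [1 if s > 20 else 0 for s in range(50)]    # group A's bar
           + [1 if s > 35 else 0 for s in range(50)],   # group B's higher bar
})

# Disparate impact ratio: selection rate of group B relative to group A.
rates = data.groupby("group")["hired"].mean()
print(f"Historical selection rates: {rates.to_dict()}")
print(f"Disparate impact ratio (B/A): {rates['B'] / rates['A']:.2f}")

# A model trained on this history reproduces the disparity it was shown.
X = pd.get_dummies(data[["group", "score"]], columns=["group"])
model = LogisticRegression(max_iter=1000).fit(X, data["hired"])
data["predicted"] = model.predict(X)
print(f"Model's selection rates: {data.groupby('group')['predicted'].mean().to_dict()}")
```

A ratio well below the four-fifths (0.8) threshold commonly used in US employment contexts flags the historical data itself as the source of the disparity the model goes on to learn.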
The ethical concerns surrounding AI often center on decision-making processes and their implications. However, these ethical dilemmas are not new; they’re pre-existing challenges made more apparent by AI’s scale and speed. Take the example of autonomous vehicles and their decision-making in potential accident scenarios. While much attention focuses on how AI should make these decisions, the underlying ethical dilemmas have existed since the advent of automotive transportation. Human drivers have always faced split-second moral choices; AI systems simply force us to codify and examine these decisions explicitly.
For decades, digital technologies have operated under ambiguous ethical frameworks, with organizations balancing growth and efficiency against these moral considerations. Such concerns affect institutions and individuals in subtle ways, as highlighted by efforts such as MIT’s Moral Machine platform, a modern descendant of the classic trolley problem. Both were designed to explore how people make moral decisions in difficult situations, and both have revealed that people’s moral judgments are influenced by a variety of factors, including the number of people involved, their age, and their perceived value to society.
The ethics of automated decision-making in ambiguous situations has long been a subject of study. However, recent advancements in AI have intensified the need for clear ethical guidelines, forcing organizations to explicitly articulate and justify their moral positions.
When AI systems produce discriminatory outcomes, they’re typically reflecting and amplifying existing societal patterns. The well-documented recent case of credit scoring algorithms showing racial disparities mirrors decades-old patterns in financial services. These algorithms learned from historical lending decisions that contained inherent socioeconomic and racial biases. Trained on this flawed data, they encoded and perpetuated discriminatory practices that have persisted in financial services for many years.
In practice, the reliance of AI systems, particularly large language models (LLMs), on flawed data can do more than reinforce the biases present in that data. The “black box” nature of these models adds further complication, making it difficult to understand or explain the decision-making process, and hence to address the underlying biases.
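Even for opaque models, simple probes can surface which inputs drive decisions. The following is a hedged sketch, assuming a hypothetical credit model trained on invented features, that uses scikit-learn’s permutation importance to reveal when a proxy feature (here, a made-up “zip_code_income”) dominates the model’s behavior.

```python
# Hypothetical sketch: probing an opaque model for influential proxy features.
# The model, data, and feature names are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
features = ["debt_to_income", "payment_history", "zip_code_income"]

# Three synthetic features; "zip_code_income" stands in for a proxy that
# can correlate with a protected attribute in real lending data.
X = rng.normal(size=(n, 3))
# The synthetic outcome leans heavily on the proxy feature.
y = (0.2 * X[:, 0] + 0.2 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(size=n)) > 0

model = GradientBoostingClassifier().fit(X, y)

# Permutation importance: how much accuracy drops when a feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(features, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

Shuffling the proxy feature collapses accuracy while the legitimate features barely register, a pattern that is invisible in the model’s raw outputs but becomes obvious under even this simple probe.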
For those affected, the capacity of AI bias to propagate discrimination has far-reaching consequences. AI algorithms can amplify existing societal biases, leading to discriminatory outcomes in areas such as lending, healthcare, and criminal justice. The legal community is particularly concerned about the potential for AI to perpetuate historical injustices and undermine the principles of fairness and equity.
This pattern repeats across various sectors where AI is deployed. The technology serves as a powerful lens, magnifying discriminatory practices that have been embedded in our digital systems and processes since their inception. The transparency provided by AI systems offers an unprecedented opportunity to identify and address these systemic issues.
In a similar way, privacy concerns attributed to AI are often extensions of existing digital privacy issues rather than novel problems. The ability to process and analyze vast amounts of personal data didn’t begin with AI; it began with the digital revolution and the rise of social media platforms, online advertising, and data brokers. Through its rapid evolution, AI has shone a light on weak data management and data governance processes, making the implications of these practices more evident and immediate.
There is no doubt that AI systems pose significant privacy risks when they enable wide-scale collection and use of personal data without explicit consent. Unfortunately, such systems are already being used to enable mass surveillance and facilitate identity theft. The lack of transparency in AI algorithms and the difficulty of controlling personal data further exacerbate these issues. Regaining some control will require a combination of regulatory measures, technological solutions, and collective action.
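One concrete example of such a technological solution is differential privacy. The sketch below shows the basic Laplace mechanism, which lets an organization release aggregate statistics while mathematically bounding what can be inferred about any individual; the epsilon values and the query are illustrative only, not production guidance.

```python
# Minimal sketch of the Laplace mechanism from differential privacy.
# Epsilon values and the query are illustrative, not production guidance.
import numpy as np

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise scaled to sensitivity / epsilon.

    Smaller epsilon gives stronger privacy but a noisier answer."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: releasing how many users match some sensitive query.
true_answer = 42
print(private_count(true_answer, epsilon=0.1))  # strong privacy, very noisy
print(private_count(true_answer, epsilon=5.0))  # weak privacy, close to 42
```

The broader point is that privacy-preserving analysis is a design choice available to organizations; AI does not make it impossible.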
Consider facial recognition technology, which has sparked intense privacy debates. The underlying privacy issue isn’t so much the increasing quality of AI’s recognition capability but rather the decades-long practice of ubiquitous surveillance and data collection. The same privacy concerns exist with traditional CCTV systems; AI has simply made the potential for privacy violations more apparent and prevalent as the technologies become more efficient, effective, and affordable.
Nevertheless, despite their historical roots, we have a critical responsibility to do all that we can to address these concerns. The relationship between AI and societal challenges presents a nuanced dynamic that requires careful consideration. While it would be misguided to cast AI as the originator of long-standing issues around bias, ethics, discrimination, and privacy, we must recognize its potential to serve as a powerful amplifier of these pre-existing problems. These challenges, deeply rooted in human history and social structures, predate the emergence of AI technology, but they must not be allowed to be reinforced and expanded by AI’s rapid adoption.
Unfortunately, in some areas the recent rise of AI has taken us beyond the tipping point. The unprecedented scale and speed at which AI systems are being deployed across sectors raises legitimate concerns about their capacity to magnify existing inequities and ethical dilemmas. Much as social media platforms accelerated the spread of misinformation beyond traditional channels, AI systems without careful controls can entrench and perpetuate societal biases through inappropriate training data and careless deployment patterns. AI technology’s ability to process and act upon vast amounts of information at superhuman speeds means that any embedded biases or ethical oversights can be propagated far more rapidly and extensively than ever before.
This reality demands a proactive and vigilant approach to AI governance and oversight. Rather than viewing AI as the source of these challenges, we must focus on ensuring that its implementation doesn’t create a cascade effect that overwhelms our existing social, legal, and ethical frameworks. This requires careful consideration of AI system design, robust testing for potential biases, and the establishment of clear governance structures that can evolve alongside the technology. The goal is not to impede AI’s advancement but to ensure its development aligns with our collective values and contributes to reducing, rather than amplifying, societal inequities.
The challenges we face with AI are not new – they are long-standing issues in our digital practices brought into sharper focus through the power and ubiquity of AI. By understanding AI as a mirror rather than a source of these problems, leaders can better address the root causes of bias, ethical concerns, discrimination, and privacy violations. This shift in perspective enables more effective solutions that address fundamental issues rather than merely treating symptoms.
As we continue to integrate AI into our organizations and society, we have an unprecedented opportunity to address these systemic challenges. The solution lies not in blaming AI but in using its revelatory power to build more equitable, ethical, and privacy-respecting digital systems. The mirror that AI holds up to our practices may show uncomfortable reflections, but it also lights the path toward meaningful improvement in our digital future.