Sometimes it feels like I’m stuck in the past. Too often when faced with a new challenge, my first inclination is not to face forwards with an open mind, but to look backwards to try to extract lessons from previous experiences that help me to describe and understand it. And while relying on what’s happened before can be very helpful in many circumstances, it also brings the real danger of being too blinkered, biased, or backward. I can’t work out if my past experience is my greatest asset, or the main anchor that holds me back.
It is a challenge that all of us face as we try to solve new problems, whether as individuals reliving past glories or as organizations suppressing innovation that falls outside existing cultural norms. We’ve all experienced it in one way or another: “Sorry, that’s not the way we do things around here!”.
Unfortunately, it is also a significant concern when looking to implement and apply AI, which sees the future through the eyes of the past. Despite its futuristic allure, AI’s intrinsic strength lies in analysing large amounts of historical data to extrapolate future scenarios. This approach raises important questions: Is AI overly reliant on the past in steering a course through an ever-evolving strategic and operational landscape? And if so, what are the implications for how we use AI to take us forward?
The analytical prowess of AI, rooted in processing extensive historical data, shines a light on hidden trends, correlations, and anomalies that often elude human observation. To illustrate, consider AI’s role in enhancing many different kinds of forecasting capabilities. By scrutinizing past sales patterns, supply chain movements, customer behaviour, and market fluctuations, AI can predict future demand with an unprecedented level of accuracy. Such predictive insights not only optimize inventory management but also facilitate the personalized tailoring of marketing campaigns to individual preferences, build communities around shared products and services, and influence global trends.
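The core mechanic here can be illustrated in miniature. The sketch below (a deliberately simplified stand-in for the far richer models real forecasting systems use) fits a straight trend line to past sales and extrapolates one period ahead; the function name and figures are purely hypothetical:

```python
from statistics import mean

def forecast_next(sales: list[float]) -> float:
    """Forecast the next period's demand by fitting a straight
    trend line (ordinary least squares) to historical sales."""
    n = len(sales)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(sales)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, sales)) \
        / sum((x - x_bar) ** 2 for x in xs)
    intercept = y_bar - slope * x_bar
    return intercept + slope * n  # extrapolate one step ahead

# Steadily rising monthly sales: the trend line projects continued growth.
history = [100, 110, 120, 130, 140]
print(round(forecast_next(history)))  # → 150
```

The point is not the arithmetic but the assumption baked into it: the forecast is only ever a projection of patterns already present in the history it was given.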
Moreover, AI’s ability to streamline operations is exemplified through its analysis of historical performance data. This empowers organizations to identify operational bottlenecks, optimize production processes, and predict equipment failures. The result is a tangible improvement in efficiency and a reduction in downtime, underscoring the transformative impact of AI on industrial operations.
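A toy version of that failure-prediction idea makes the mechanism concrete. This hedged sketch (all readings and thresholds invented for illustration) flags sensor values that stray far from the historical baseline, a crude proxy for the pattern-spotting an AI maintenance system performs at scale:

```python
from statistics import mean, stdev

def flag_anomalies(history: list[float], recent: list[float],
                   k: float = 3.0) -> list[float]:
    """Flag recent readings more than k standard deviations from
    the historical baseline -- an early-warning sign of failure."""
    mu, sigma = mean(history), stdev(history)
    return [r for r in recent if abs(r - mu) > k * sigma]

baseline = [70.1, 69.8, 70.3, 70.0, 69.9, 70.2]  # normal vibration levels
print(flag_anomalies(baseline, [70.1, 85.4, 69.7]))  # → [85.4]
```

Again, the historical data defines “normal”; anything the past never exhibited can only register as an anomaly after the fact.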
The innovation acceleration facilitated by AI is equally noteworthy. The mining of past research papers, patents, and industry trends enables AI to expedite the discovery of novel ideas, materials, and products. From designing new drugs to finding hidden deposits of raw materials, this newfound agility provides organizations with a competitive edge, illustrating how the well-tuned use of historical data can propel organizations and industries forward.
Yet, as we have seen all too clearly recently, predicting the future is fraught with challenge. While historical trends offer valuable insights, they can be particularly fragile when faced with the unknown. Of course, black swan events, like pandemics or technological breakthroughs, can shatter established patterns. However, often it is the more routine challenges that are a greater threat. Complex systems like platforms, markets, or societies are inherently dynamic, with countless factors interacting in unpredictable ways. As a result, even minor adjustments in starting conditions or small variations in the operating context can lead to wildly divergent outcomes, making precise predictions near impossible. While data is crucial for understanding the past and present, embracing the inherent uncertainty of the future is key to making informed decisions and navigating the uncharted waters that lie ahead.
Consequently, a nuanced understanding of the limitations inherent in AI’s past-dependence in its use of data is essential. A clear example is the potential introduction of data bias. AI algorithms trained on skewed or outdated data risk perpetuating existing biases and inequalities. For instance, a recruitment AI system may be trained on past hiring data that embeds cultural and corporate biases concerning candidates’ background, education, ethnicity, and gender. The risk is that the AI system might inadvertently replicate this bias in future recommendations, exacerbating imbalances within the workforce.
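The recruitment example can be sketched in a few lines. The records below are entirely fabricated for illustration, and the “model” is the crudest possible one, but it shows the mechanism: a system that learns from skewed hiring history will recommend candidates at the same skewed rates.

```python
# Hypothetical past hiring records: (school, hired?). The data itself
# encodes a historical preference for candidates from one background.
past_hires = [("school_a", True)] * 80 + [("school_a", False)] * 20 + \
             [("school_b", True)] * 10 + [("school_b", False)] * 90

def hire_rate(records, school):
    outcomes = [hired for s, hired in records if s == school]
    return sum(outcomes) / len(outcomes)

def naive_model(school):
    """Recommend a candidate whenever their group's historical
    hire rate exceeds 50% -- faithfully replicating the bias."""
    return hire_rate(past_hires, school) >= 0.5

print(naive_model("school_a"), naive_model("school_b"))  # → True False
```

Real recruitment models are far more sophisticated, but the risk is the same: without deliberate intervention, the bias in the training data becomes the bias in the recommendations.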
Another significant limitation arises from AI’s propensity to primarily extrapolate from existing patterns, rendering it less adept at predicting disruptive innovations or unforeseen events. A case in point is a large language model, like ChatGPT, trained on historical news articles. Such a model might struggle to accurately predict groundbreaking scientific discoveries or significant political upheavals due to its limited exposure to alternative possibilities beyond historical data.
Furthermore, an overreliance on AI predictions has been seen to foster a false sense of certainty among decision-makers. It is crucial for leaders to remember that predictions, despite their precision, are still probabilistic in nature, necessitating a balanced approach that considers alternative perspectives.
Underlying this challenge is often a poor understanding among leaders and decision makers of the fundamental concepts of AI and data science. Hence, many people beginning to rely on AI systems have little meaningful understanding of what’s inside the “AI black box”. A deeper scrutiny of AI’s use of data for prediction exposes several important principles that must be recognised by anyone involved in the responsible use of AI.
The COVID-19 pandemic serves as an illustrative case study, demonstrating how reliance on pre-pandemic data can lead to misleading predictions. Consider the fragility of AI-supported supply chains as they struggled to cope during the pandemic. Due to drastic swings in production, surges in demand, and the redesign of supply chains, AI predictions during that period diverged widely from the new business reality. Predictions that seemed perfectly logical were frequently rendered entirely inaccurate by shifts in production and consumer behaviour, both during and after the pandemic.
This scenario underscores several potential pitfalls. Firstly, the occurrence of unforeseen events, such as the pandemic, can significantly impact markets and behaviours. AI models trained on pre-pandemic data lack the context to understand and predict such shifts, highlighting the limitations of past data in foreseeing unprecedented events.
Secondly, the concept of temporal bias must be addressed. Data collected during specific periods may not be representative of long-term trends. Predictions based on data influenced by the pandemic might not hold true in a post-pandemic world, emphasizing the importance of continuously updating and refreshing training data.
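One common (if blunt) mitigation for temporal bias is to weight or window the data so that stale observations from a vanished regime stop driving predictions. A minimal sketch, with invented demand figures depicting a pandemic-era spike followed by a return to normal:

```python
def windowed_mean(series: list[float], window: int) -> float:
    """Estimate current demand from only the most recent observations,
    discarding stale data that may reflect a vanished regime."""
    return sum(series[-window:]) / window

# A pandemic-era surge followed by a return to normal levels:
demand = [100, 100, 100, 300, 310, 305, 110, 105, 100]
print(windowed_mean(demand, len(demand)))  # full history → 170.0
print(windowed_mean(demand, 3))            # recent window → 105.0
```

Averaging over the full history still carries the pandemic distortion; the recent window tracks the new reality. Choosing that window is itself a judgment call, which is why continuous monitoring and refreshing of training data matters.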
Finally, the contrast between static and dynamic environments becomes evident. The world is in a constant state of flux, and AI models rigidly reliant on past data can fail to adapt to changing market conditions, consumer preferences, and unforeseen disruptions. Like technical debt in software, data debt in AI systems can be corrosive.
Overcoming AI data limitation issues is far from easy. To navigate these intricate challenges, digital leaders must adopt a strategic and proactive stance to data management.
To be effective, a deeper understanding of AI’s use of historical data is critical. While looking backwards remains a cornerstone of AI’s predictive capabilities, it should not dictate our vision for the future. By acknowledging and actively addressing the limitations inherent in AI’s reliance on historical data, organizations can unlock its potential and take a more responsible approach to the use of AI to lead them forwards.