Building trust is the key to AI-at-Scale

Written by Prof. Alan Brown, AI Director, Digital Leaders

AI promises to be a transformative force across many industries, offering immense potential for innovation and growth. However, successfully scaling AI deployments will only be possible if we overcome a major hurdle: building trust in AI.

In working with several organisations recently, I have seen that a focus on trust is especially important for digital leaders and decision-makers as they adopt AI-at-Scale. What can digital leaders do to build trust in AI?


AI’s original sin

We are beginning to recognise that adopting AI means placing a great deal of trust in AI tools and their vendors. Understanding how data is acquired and used is central to the ongoing debate about the appropriate adoption of AI. While there are many elements to this, I have found that Tim O’Reilly’s recent work provides a succinct summary of these concerns around what some have called “AI’s original sin” – the data used to train AI models.

In his article, O’Reilly highlights four key points about the use of data for training AI models, each with fundamental implications for digital leaders:

  1. Copyright violations and ethical boundaries: The controversy over tech giants like OpenAI and Google using transcriptions of YouTube videos as training data, despite potential copyright violations, underscores the tension between AI development and existing copyright laws. This raises ethical and legal questions that can erode trust in AI technologies if not transparently addressed and regulated.
  2. Political economy of AI-generated content: A key issue is how companies pay for the data and content they use. O’Reilly emphasises shifting the focus from legal battles over copyright infringement to understanding the political economy of AI-generated content. Creating new business models and institutions that fairly allocate value among all parties in the AI supply chain can reinforce trust in AI: transparent and equitable systems for value distribution ensure that stakeholders see their contributions acknowledged and compensated fairly.
  3. Impact on content creators: O’Reilly argues that likening generative AI tools to search engines is a false comparison. Search engines drive traffic to the original source, whereas AI-generated summaries can reduce traffic to original content, potentially harming content creators. To maintain trust in AI, systems must be developed that ensure content creators benefit from AI’s use of their work, much as search engines reward websites with traffic. Ensuring that AI models generate outputs that respect and credit original sources is crucial to building a sustainable and trusted AI ecosystem.
  4. Participatory AI architecture: One way to build trust may be through open-source approaches. O’Reilly advocates a participatory architecture for AI, akin to the World Wide Web, in which AI systems are built on open protocols that respect content ownership and copyright. This would allow content creators to control how their work is used and monetised, fostering a collaborative environment. Trust in AI would be significantly enhanced if users and creators were confident that their rights are protected and that they are active participants in the AI-driven digital economy.

From RAG to riches

Yet trust concerns extend beyond the training of AI tools. When using generative AI tools such as ChatGPT, we face important questions about the accuracy and relevance of AI-generated content. Ensuring that AI-generated outputs are grounded in verifiable sources is essential, and Retrieval-Augmented Generation (RAG) is a promising approach to this challenge. Understanding several aspects of RAG is critical for building trust:

  1. The mechanics of RAG: RAG operates in two main stages. First, it employs a retrieval mechanism to search a large collection of documents for information relevant to a given query, giving the AI access to a broad range of factual data and context. Second, the generation component uses this retrieved information as the foundation for its response. By anchoring its output in specific, verifiable sources, RAG keeps the generated content both contextually appropriate and factually accurate (a minimal code sketch of the two stages follows this list).
  2. Grounding responses in source material: One of RAG’s standout features is its ability to ground responses in well-defined source materials. During the retrieval phase, the model scours databases, documents, and other repositories to gather pertinent information. This retrieved content is then directly referenced or integrated into the generated response, providing a clear lineage back to the original sources. This traceability not only enhances the credibility of the AI’s output but also allows users to verify the information, fostering greater trust and transparency.
  3. Mitigating hallucination in generative AI: Hallucination occurs when a generative model produces content that, while syntactically correct, lacks factual accuracy or grounding in reality. The risk is ever-present, but it is particularly problematic in applications requiring high reliability, such as medical advice, legal information, or financial analysis. RAG addresses it by ensuring that the generative process is informed and constrained by the real data retrieved during the initial phase, significantly reducing the likelihood of hallucination because the model’s outputs are tied directly to verifiable sources.

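To make the two stages concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption: a tiny in-memory corpus, a crude keyword-overlap retriever, and a template standing in for the generative model. A production system would use a vector index for retrieval and an LLM call for generation, but the shape of the pipeline is the same: retrieve first, then generate from the retrieved context.

    # Toy two-stage RAG pipeline: retrieve relevant passages, then
    # produce an answer grounded in (and citing) those passages.
    # The corpus, scorer, and template below are illustrative only.
    CORPUS = [
        {"id": "doc-1", "source": "HR policy handbook, section 4",
         "text": "Employees may carry over up to five days of unused leave."},
        {"id": "doc-2", "source": "IT security guide, page 12",
         "text": "All laptops must use full-disk encryption and a screen lock."},
        {"id": "doc-3", "source": "Finance FAQ",
         "text": "Expense claims must be submitted within thirty days."},
    ]

    def retrieve(query, k=2):
        """Stage 1: rank documents by word overlap with the query."""
        terms = set(query.lower().split())
        ranked = sorted(
            CORPUS,
            key=lambda d: len(terms & set(d["text"].lower().split())),
            reverse=True,
        )
        return ranked[:k]

    def generate(query, passages):
        """Stage 2: build a response anchored in the retrieved passages.

        A real system would send this context to an LLM, instructing it
        to answer only from the supplied passages; a template stands in
        for the model here so the sketch stays self-contained.
        """
        context = "\n".join(f"[{p['id']}] {p['text']}" for p in passages)
        sources = ", ".join(p["source"] for p in passages)
        return f"Question: {query}\nContext used:\n{context}\nSources: {sources}"

    query = "How many days of unused leave can employees carry over?"
    print(generate(query, retrieve(query)))

Note how each passage carries a source field that is surfaced alongside the answer. That small detail is the traceability described above: it gives users a clear lineage from the generated response back to the original material.
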
In essence, RAG shifts generative AI from an approach in which plausible-sounding fabrications can easily occur to one in which responses are rooted in factual data. In many situations it is an important step forward, delivering more reliable, transparent, and trustworthy AI solutions and paving the way from potential pitfalls to genuine riches in AI capabilities.


Leading AI-at-Scale

Building trust is critical to the success of AI-at-Scale. It ensures that all stakeholders are confident in the reliability, fairness, and security of AI systems. Without trust, there is a significant risk of resistance to adoption, underutilisation, and even backlash against AI initiatives. Trust in AI encompasses various dimensions, including transparency in how AI models make decisions, accountability for the outcomes AI produces, assurances of data privacy, and attention to ethical considerations. Establishing trust helps to mitigate fears about job displacement, bias, and loss of control, which are common concerns in large-scale AI deployments. It also fosters a collaborative environment in which users feel their feedback is valued and incorporated, leading to continuous improvement and refinement of AI systems.

The role of a digital leader in building this trust is pivotal. Digital leaders are responsible for setting the vision and strategy for AI adoption, ensuring that ethical guidelines and best practices are followed. They must communicate clearly and effectively about the benefits and limitations of AI, promoting a culture of transparency and openness. This includes advocating for robust data governance frameworks, investing in explainable AI technologies, and ensuring rigorous testing and validation processes. Moreover, digital leaders play a crucial role in building interdisciplinary teams that bring diverse perspectives to the table, thus enhancing the robustness and fairness of AI systems. By leading by example and fostering an environment of ethical innovation, digital leaders can build and sustain the trust necessary for the successful scaling of AI initiatives.

Digital leaders must prioritise a comprehensive approach to AI that addresses risk, builds trust, and unlocks value. In practice, this means they should:

  • Integrate responsible AI practices throughout the development lifecycle.
  • Foster transparency and explainability in AI decision-making.
  • Develop fair and equitable value distribution models within the AI ecosystem.
  • Leverage RAG-like approaches to ensure the factual grounding of AI outputs (a brief grounding-check sketch follows this list).
  • Focus on deriving value-in-use by applying AI to generate tangible business outcomes.
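
One practical way to act on the factual-grounding point is to check, automatically, that a generated answer cites only material that was actually retrieved. The sketch below assumes the bracketed-citation convention from the earlier RAG example; the id pattern and the acceptance rule are illustrative assumptions rather than any standard.

    # Minimal grounding check, assuming answers cite passages with
    # bracketed ids such as "[doc-1]". The id format and pass/fail
    # policy are assumptions; the point is that grounding can be
    # verified mechanically before an answer reaches a user.
    import re

    def is_grounded(answer, allowed_ids):
        """Accept an answer only if it cites at least one retrieved
        passage and cites nothing outside the retrieved set."""
        cited = set(re.findall(r"\[(doc-\d+)\]", answer))
        return bool(cited) and cited <= set(allowed_ids)

    print(is_grounded("Leave carries over [doc-1].", {"doc-1", "doc-2"}))  # True
    print(is_grounded("As noted in [doc-9].", {"doc-1", "doc-2"}))         # False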

By embracing these principles, digital leaders can deliver AI-at-Scale, fostering innovation, building trust, and driving sustainable growth in the digital landscape.

