Generative AI: Understand the challenges to realize the opportunities


Written by Marion Eigner, Sr. AI Strategist, & Neil Mackin, Principal Machine Learning Strategist, AWS

Generative artificial intelligence (AI) allows anyone to leverage machine learning (ML) capabilities using natural language, making it extremely intuitive to use. When users can search, analyze and draw conclusions in seconds from the extensive information that exists across their organization or the internet, they can make more informed decisions at speed. This can help them answer customer queries efficiently, pinpoint significant changes to contracts and assess risks such as fraud more accurately. Organizations can make more effective use of resources and provide better services by gaining useful insights, such as peak-use patterns or the likelihood of good outcomes in different scenarios.


What’s different about generative AI?

Generative AI models are trained on large volumes of data, which gives them the ability to generate answers to a wide range of questions and summarize findings in a way that is meaningful to the user. Common use cases in the public sector include determining the best way to reduce Friday afternoon congestion, or how to manage building utilities more efficiently.

To suggest answers, generative AI systems can combine and cross-analyze a diverse range of data in milliseconds to produce a spoken, graphical or easy-to-understand written summary.
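
As a simple illustration, here is a minimal sketch of what such a request could look like in code, using Amazon Bedrock's Converse API from Python. The model ID, AWS Region and sample records are illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch: asking a foundation model to summarize combined data.
# Assumes AWS credentials are configured and Amazon Bedrock model access
# is enabled; the model ID, Region and sample records are illustrative only.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Stand-in for data gathered from across the organization.
usage_records = "Mon 09:00 load 62% | Fri 16:00 load 91% | Sat 11:00 load 40%"

prompt = (
    "Summarize the peak-use patterns in these building-utility records "
    f"and suggest one efficiency measure:\n{usage_records}"
)

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)

# The model's plain-language summary for the user.
print(response["output"]["message"]["content"][0]["text"])
```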


What are the limitations or risks of generative AI?

Generative AI models are only as reliable as the data they're trained on and can access. There is a risk of hallucination, in which a model generates output that sounds plausible and factual but may not be correct. Anyone who bases decisions and actions on the results of an AI-based query needs to be able to stand by that choice and articulate how it was reached, to avoid unfair targeting or other forms of bias, wasted resources or other questionable decisions.


How can organizations mitigate those risks?

Any organization or team that uses generative AI to make decisions or prioritize actions must build responsible AI systems that are fair, explainable, robust, secure and transparent, and that safeguard privacy. Good governance is fundamental to responsible systems. It's important to be able to justify how these process-support systems arrived at their choices.

Organizations need to design and use a proven, well-architected AI framework and operating model that provides for continuous monitoring of the system in use. There has to be full awareness of potential issues and of what's needed to mitigate them. Those issues could involve limitations in the data (its quality, level of standardization, currency and completeness) and any risk of bias, data-protection breaches, or other regulatory or legal infringements.
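
One small element of such an operating model, sketched below, is recording every model interaction so it can be reviewed later. The log fields and the simple JSON-lines file used here are assumptions for illustration; in practice this would feed a proper monitoring and audit pipeline.

```python
# Illustrative sketch: keep an auditable record of every model call so
# continuous monitoring and later review are possible. The storage format
# (a local JSON-lines file) and fields are assumptions for illustration.
import json
from datetime import datetime, timezone

AUDIT_LOG = "genai_audit.jsonl"

def log_interaction(model_id: str, prompt: str, completion: str) -> None:
    """Append one auditable record of a model interaction."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "prompt": prompt,
        "completion": completion,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```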

Systems must be transparent: if someone challenges a decision supported by the AI system, it must be possible to trace the reasoning behind it. Examples include citing the specific sources used in a summarization or tracking the customer data that fed any ML models.
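
To make source citation concrete, the short sketch below numbers each source passage and asks the model to cite those numbers in its summary, so a reviewer can trace every claim back to the underlying document. The sample passages and instruction wording are hypothetical.

```python
# Sketch of source-grounded summarization: numbering the passages lets the
# model cite them, so any claim in the summary can be traced to a source.
# The passages and instruction wording below are hypothetical examples.
sources = [
    "Contract v2, clause 4: payment terms extended from 30 to 60 days.",
    "Contract v2, clause 9: liability cap reduced to 1M.",
]

numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))

prompt = (
    "Summarize the significant contract changes below. After each claim, "
    f"cite the supporting source number in brackets.\n\n{numbered}"
)

print(prompt)  # This prompt would then be sent to the model, as above.
```

The cited numbers can then be checked against the stored passages, giving anyone who challenges the output a direct line back to the evidence.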

For a deeper dive, watch the four-part AWS Institute Masterclass series on AI/ML.


