How trauma-informed is your chatbot?
October 2023
A few months ago, I was privileged to take part in the Digital Leaders’ 18th National Digital Conference, which explored the opportunities, benefits, risks, and societal impact of AI. It was fascinating to hear the different perspectives discussed during the day, and to see the common themes that emerged.
One theme that particularly stood out was the importance that everyone placed on developing and using AI responsibly and ethically. Society is becoming increasingly aware of AI, and increasingly questioning of it, so trust and confidence will rightly need to be earned by demonstrating that AI is being developed and used with people’s best interests in mind.
Here at Informed, AI is playing an increasingly significant role in the digital transformation programmes that we deliver for our clients, and in the solutions we provide to our international customer and partner community. We want the solutions we deliver to have a positive impact, and so it’s hugely important to us that we develop and use AI responsibly and ethically. Taking part in the conference made me reflect on how we approach AI assurance, and I wanted to share some of the principles and practices that we have found make a noticeable difference.
Over the last few years, information assurance has become a more integral part of every organisation. All organisations in the UK have an obligation to protect data in line with GDPR but, for most organisations, data protection is just one information assurance function that sits alongside others such as information security and cyber security. AI is data-driven, and so AI assurance has a tight relationship with these other information assurance functions.
Whilst AI assurance, data protection, information security, and cyber security are complementary and interrelated, the level of collaboration between specialists in each of these assurance functions is often limited. For example, it isn’t often that we see data scientists, data protection specialists and information security specialists sitting down together to co-review a Data Protection Impact Assessment for a new AI-based service, or to brainstorm the organisational and technical measures that will help to make an AI solution safe, secure, transparent, and fair by design. This sort of siloed working isn’t uncommon, but it is a missed opportunity that risks poorer outcomes for AI assurance.
AI assurance, data protection, information security, and cyber security may be different and very specialised disciplines, but they all share a common outcome: to create trust and confidence by assuring that information is being managed responsibly, ethically and legally. Given that shared outcome, organisations should reflect on their operating model for information assurance and, if they need to, make changes that require close collaboration between the different functions. Close collaboration leads to a more complete and cohesive understanding of risks and opportunities that is greater than the sum of its parts. A more complete and cohesive understanding leads to more effective actions. More effective actions lead to more assured AI and greater levels of trust and confidence.
Improving collaboration between assurance functions might sound easier said than done, but we have seen it done simply and well. The best examples are where assurance functions have adopted ways of working that you would typically find in an agile product team. The Plan-Do-Check-Act lifecycle that is a staple of many ISO standards maps closely onto Scrum’s sprint planning, delivery and review/retrospective cycle, and we have seen assurance functions use Scrum very successfully as a methodology for running multi-disciplined teams that work collaboratively to shape and agree shared assurance goals and deliver a Backlog of work that achieves them.
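To make that mapping concrete, here is a rough, illustrative pairing of the Plan-Do-Check-Act phases with Scrum events; the wording is indicative only and is not taken from any ISO standard or the Scrum Guide.

```python
# Indicative mapping of Plan-Do-Check-Act phases to Scrum events for a
# multi-disciplined assurance team (illustrative wording, not a standard).
pdca_to_scrum = {
    "Plan":  "Sprint Planning - agree shared assurance goals and pull items from a joint Backlog",
    "Do":    "The Sprint - data protection, security and AI specialists deliver the work together",
    "Check": "Sprint Review - inspect what was delivered against the shared assurance goals",
    "Act":   "Retrospective - adapt ways of working and feed improvements into the next Sprint",
}

for phase, event in pdca_to_scrum.items():
    print(f"{phase}: {event}")
```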
Security by design, privacy by design and data protection by design and default are concepts that we’re all familiar with and subscribe to. These concepts say that security, privacy and data protection considerations should be ‘baked in’ to everyday working practices so that they are assured as a matter of course throughout the delivery lifecycle, rather than every so often. Applying the same principle to AI assurance will help to ensure that AI is safe and ethical by design and has people’s best interests in mind.
The majority of digital transformation programmes involve the delivery of new products, services and capabilities using agile methodologies based on frameworks like Scrum, Nexus and SAFe. These methodologies involve multi-disciplined teams of User Researchers, Service Designers, Architects, Data Scientists and Developers delivering products and services in a user-centred and iterative way. Teams frequently inspect and adapt what they are delivering to assure that user needs are being met, quality is high, and risks are being mitigated. This ‘baked in’ focus on user needs, quality and risk means that agile delivery methodologies can be adapted to embed AI assurance techniques with relatively little effort.
Here is one simple example of how we have embedded AI assurance techniques into a two-week Discovery Sprint where the goal is to understand user needs for a new digital service that incorporates AI:
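The sketch below is illustrative only: the activities, checkpoints and artefacts are hypothetical examples of the kind of thing a team might agree, not a prescribed plan.

```python
# Illustrative Discovery Sprint backlog in which each delivery activity is
# paired with an assurance checkpoint and the function responsible for it.
# Activity names and artefacts are hypothetical, not a template.
discovery_backlog = [
    {"activity": "Plan and run user research",
     "assurance": "Data protection",
     "artefact": "DPIA started; consent and retention approach agreed"},
    {"activity": "Map the data sources the AI will rely on",
     "assurance": "Information security",
     "artefact": "Data classification and access controls reviewed"},
    {"activity": "Prototype the AI-assisted user journey",
     "assurance": "AI assurance",
     "artefact": "Bias, fairness and transparency risks logged"},
    {"activity": "Sprint Review and Retrospective",
     "assurance": "All functions together",
     "artefact": "Shared risk log updated; assurance items added to the next Backlog"},
]

for item in discovery_backlog:
    print(f"{item['activity']} | {item['assurance']} | {item['artefact']}")
```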
These are all simple things, but making them an embedded part of your delivery method has significant benefits. The overall approach allows organisations to balance agility and innovation with control, which is in keeping with the spirit of the pro-innovation approach to AI regulation and assurance set out in the recent UK Government white paper. The frequency of inspection and adaptation reduces the likelihood of more insidious risks, such as bias in data and models, creeping in unnoticed. There are regular forums for involving assurance specialists in delivery and for different assurance functions to work shoulder-to-shoulder. It becomes more straightforward to quickly reconcile different viewpoints within the team, such as how to balance user needs identified through research with compliance obligations identified by assurance specialists, and to adapt AI assurance techniques (such as those set out in the CDEI portfolio of AI assurance techniques) as new needs, standards and guidance emerge.
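As one small example of what ‘baked in’ can mean in practice, a team could run a simple bias check on its data or model outputs at every Sprint Review. The sketch below is illustrative only: the field names, data and the four-fifths threshold are assumptions, not a recommendation drawn from the white paper or the CDEI portfolio.

```python
# Illustrative sprint-by-sprint bias check: compare selection rates between
# groups and flag any group whose rate falls below a chosen fraction of the
# highest rate (the common "four-fifths" rule of thumb is used here).
from collections import defaultdict

def selection_rates(records, group_key, outcome_key):
    """Return the proportion of positive outcomes for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        positives[record[group_key]] += int(bool(record[outcome_key]))
    return {group: positives[group] / totals[group] for group in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose selection rate is below threshold x the highest rate."""
    highest = max(rates.values())
    if highest == 0:
        return {}
    return {group: rate / highest for group, rate in rates.items()
            if rate / highest < threshold}

# Hypothetical records; in practice these would come from the team's own data or model outputs.
records = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

rates = selection_rates(records, "group", "approved")
print(flag_disparities(rates))  # {'B': 0.5} -> a prompt for a conversation at the Sprint Review
```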
AI assurance is closely intertwined with other information assurance functions and should be approached with a ‘by design’ mindset. AI, data protection, information security, and cyber security assurance functions should collaborate closely, and AI assurance techniques should be baked into your delivery approach. Agile delivery frameworks like Scrum can be readily adapted to allow this and, by doing so, AI assurance becomes an everyday team sport. Ultimately, that can only lead to higher levels of trust and confidence that AI is being developed and used with people’s best interests in mind.
Originally posted here