The EU AI Act received its final approval from the Council of the EU on 21 May 2024 and will enter into force shortly. Overall, we are supportive of the approach that the Act takes – it is values-focused, principles-based, and a landmark step towards fostering trustworthy AI that is developed and used safely, responsibly, and ethically.
Like many organisations, we’ve taken this milestone as a prompt to reflect on what AI means to all of us here at Informed, and to make sure we have a clear direction and strategy for how we will comply with the Act.
Understandably, the EU AI Act and frameworks like it focus on understanding and mitigating the risks surrounding AI so that our fundamental rights and freedoms are protected. Here at Informed, we’re very mindful of the risks that come with developing and using AI but, equally, we’re conscious of the consequences of swinging too far in the other direction and allowing risk aversion to hold back innovation.
Through our lives and work, it’s clear that our needs and expectations as citizens and customers continue to grow, whilst the human and financial resources needed to meet demand are becoming scarcer and more thinly spread. The data that underpins decision-making continues to grow massively in volume and complexity, making it increasingly difficult for people to analyse and act on it unaided. AI is one of the main tools society has at its disposal for solving these challenges at scale, so being overly cautious about its use brings its own set of risks.
Given that, we intend to take a balanced approach to how we deliver and use AI. In line with the principles of the EU AI Act, we’ll take great care to understand potential risks and mitigate them responsibly, but we will also be optimistic and ambitious about the benefits of developing and using AI to solve the challenges described above.
This isn’t always an easy balance to strike, but we have lived experience of doing it at scale and in diverse settings. One example is our work with NHS England’s Patient Safety Team, with whom we collaborated to deliver the award-winning national Learn From Patient Safety Events service. The service uses AI, machine learning and data-driven insights to support patient safety learning and continuous improvement, such as identifying new or under-recognised risks so that action can be taken to keep patients safe. Another is our work with NatureScot, where we collaborated to create a national platform that uses AI and Natural Language Processing to enhance decision-making, freeing NatureScot’s experts to spend more time on priority policy areas related to the climate and biodiversity crises. Both examples show that it is possible to apply AI in complex environments in a safe, controlled and innovative way.
Having worked out where we stand on the risks versus the benefits of AI, we wanted to crystallise what this means in practice for how we will use and deliver AI solutions. We did this by developing our AI Charter, which you can find on our website. The Charter describes the ground rules we will follow – with our clients, partners, and within our own teams – to realise the benefits that AI can offer in an ethical, safe, and responsible way.
Developing the Charter has been a valuable exercise, aligning diverse viewpoints and priorities and building a shared understanding of how we will all move forward with AI. If you’re grappling with similar questions, you might find a similar exercise worthwhile.
In summary, we think it’s important for organisations to crystallise a position on what AI means for them, and what that position means in practice day-to-day. As we navigate the evolving landscape of AI regulation, it’s vital to take a balanced approach that recognises both the potential and the risks associated with AI. By doing so, we can harness the transformative potential that AI offers while ensuring it remains safe and trustworthy.