Building a competitive, ethical AI economy

Written by Katherine Mayes, Programme Manager at techUK

Earlier this month, Sage published its position paper, Building a Competitive, Ethical AI Economy, outlining the key steps for government and businesses to put ethical AI principles into practice for the benefit of industry, government and society. The paper was compiled with input from government representatives and global businesses, including techUK.

The paper sets out actionable insights for business and society on using AI-powered technologies in an ethical, trustworthy and sustainable way. It argues that industry leaders and government must work closely with AI experts to put ethical principles into practice under four key pillars:

1.    Introducing AI corporate governance and ethical frameworks

  • For business – Develop or revise corporate governance frameworks to include ethical technology policies with top-down accountability measures specific to each organisation’s business model. Include adherence to these frameworks as a standing agenda item at board meetings, employee performance reviews and less formal management/staff check-ins to establish accountability expectations at every level.
  • For government – Look at the role of regulators, such as the UK’s Financial Reporting Council (FRC), in guiding and assisting specific sectors on implementing ethical best practice. Work with industry AI experts to familiarise regulators with the technology’s technical makeup, potential security risks and real-world applications before launching formal investigation programmes. Review the need to enforce domestic and/or international standards to ensure a level playing field.

2.    Demystifying AI and sharing accountability

  • For business – Engage external ethics experts to explore how AI accountability or explainability applies to specific corporate ambitions and customers’ needs. Develop strategies for testing AI prior to deployment and for monitoring it once it is out in the world.
  • For government – Recognise the need to balance corporate AI innovation with increased accountability.

3.    Building human trust in corporate AI

  • For business – Make corporate approaches to informing stakeholders about AI and its purpose as transparent as possible. Introduce training and certification programmes for partners and employees who use AI to conduct business. Communicate to potential users the steps taken to test AI for performance flaws and to safeguard work done with the technology.
  • For government – Run government-anchored awareness campaigns to reduce public inhibitions around the presence of AI in work and everyday life.

4.    Welcoming AI into the workforce

  • For business – Invest in school programmes to support community digital education. Empower HR functions with data to map future skills demand. Invest in retraining. Call on fellow businesses and governments to incorporate AI and data science into staff training throughout the ranks.
  • For government – Ensure young people leave education equipped to apply AI and with an understanding of the wider ethical issues. Redirect existing skills investment into staff retraining for jobs that interact significantly with AI and other automated technologies.

Taking practical steps to address the ethical issues posed by AI should be an ongoing priority for government and businesses alike. On Thursday 4 October, DigitalAgenda are hosting their Power & Responsibility Summit, looking at the digital changes taking place across society and the economy, including critical themes such as privacy, online safety, trust, developer ethics and fake news. Find out more here.

Read the report


This article was originally published here.
