Like all cutting-edge, high-growth industries, artificial intelligence (AI) is not without its challenges. Of these, I believe the most pervasive is gender inequality.
A report published by the AI Now Institute in April 2019 found that only 18% of authors at leading AI conferences are women and more than 80% of professors in the field are men. At market-leading companies, the situation is even more confronting. Only 15% of AI researchers at Facebook and 10% of AI researchers at Google are women.
I believe this ‘diversity disaster’ – as the report calls it – could fundamentally limit the industry’s capacity for good. This is because AI systems are at risk of inadvertently replicating the biases of their creators. If development teams share the same male perspective, then the tools they create will, at best, be limited and, at worst, extend gender discrimination into the digital arena.
As someone who’s been in the AI industry for over three decades, I know that the key to being a successful company is the ability to adapt to new challenges as they arise. The gender imbalance is no different.
Many people assume that because algorithms process information using computational logic, they cannot behave in a discriminatory manner. Indeed, one of the most persuasive arguments for the potential of AI lies in the fact that, if implemented correctly, it could iron out human bias from vital decision-making processes. For example, an effective hiring tool could reduce the likelihood of men receiving preferential treatment when interviewing for jobs in certain industries.
However, the reality is not so straightforward. AI works by recognising patterns in vast data sets that would otherwise be too unwieldy for human analysis. This means that discriminatory logic can end up wired into the functioning of an AI system if it is insensitive to pre-existing biases in the data.
This flaw was painfully exposed when Amazon, one of the biggest technology companies in the world, trialled an experimental AI-driven hiring tool to review CVs more efficiently. The company found that the algorithm was systematically discriminating against female candidates, partly because they didn’t use forthright verbs like ‘captured’ and ‘executed’ as frequently as male applicants.
Essentially, the algorithm learned to favour CVs that resembled those of previously successful applicants – who were overwhelmingly male. The system even went as far as to penalise applicants for attending an all-women’s college or for including the word ‘women’s’, as in ‘women’s chess club captain’, in their CV.
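To make this mechanism concrete, here is a minimal sketch of how a screening model trained on biased historical decisions can end up penalising gendered language. The CVs, labels and model below are entirely hypothetical – a toy illustration of the pattern described above, not a reconstruction of Amazon’s actual system.

```python
# Toy illustration: a CV screener trained on biased historical hiring
# decisions learns to penalise tokens associated with female applicants.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: past hires happen to favour CVs that use
# certain verbs, simply because most previously successful applicants were men.
cvs = [
    "executed trading strategy and captured new clients",         # hired
    "captured market share and executed product launch",          # hired
    "led women's chess club and coordinated volunteer outreach",  # rejected
    "women's coding society president who organised hackathons",  # rejected
    "executed migration plan and captured key requirements",      # hired
    "coordinated women's mentoring programme and led workshops",  # rejected
]
hired = [1, 1, 0, 0, 1, 0]

# Turn each CV into word counts and fit a simple classifier to the
# historical outcomes.
vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: words that only appear in rejected CVs,
# such as "women", receive negative coefficients and drag down any new
# CV that contains them.
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
for token in ("executed", "captured", "women"):
    print(f"{token:>9}: {weights[token]:+.3f}")
```

Nothing in this sketch sets out to discriminate; the bias arrives entirely through the historical labels, which is precisely why teams able to question their training data matter so much.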
However, the problem of gender bias isn’t confined to issues lurking in historical data. All-male developer teams are liable to inadvertently create flawed AI because they fail to pre-empt how their algorithms might produce discriminatory outcomes. After all, to prevent an issue from having dire consequences, you have to be aware it exists in the first place.
Such blind spots have led some of the biggest technology companies to release poor products that attracted criticism and caused reputational damage. For example, in 2016, Microsoft had to pull the plug on its smart chatbot, Tay, after the AI began expressing extreme and misogynistic views.
The system’s failure illustrates how AI can promote offensive messages or material if its creators are insensitive to the need to filter out certain content. In essence, AI tools developed by homogenous teams may be blind to the ways their technology could be misused, because those teams lack the breadth of experience required to think critically about the ethical implications of what they build.
As a tech entrepreneur myself, I know how important it is for end users to be confident in your ability to create safe and well-engineered platforms. While a company like Microsoft can absorb such a public setback, cases of AI gone awry can be terminal for a less established entity.
The best way to prevent this type of bias from infecting AI technology is to involve people from under-represented backgrounds at every stage of the development cycle. Diverse and representative development teams are better able to pre-empt problems that would otherwise only become evident once the system had begun discriminating against people in the real world. Having gender-balanced development teams is the first line of defence against biased algorithms and their potentially devastating consequences.
Even if every tech company decided that addressing the gender imbalance was its most pressing concern, the problem would still not be solved overnight, because the present situation is so complex and so deeply embedded in the culture of tech. In essence, a whole host of structural factors combine to make a career in AI far more attractive to men. Consequently, real progress can only come about if industry stakeholders commit to a multi-faceted strategy that addresses the causes of gender inequality from a number of angles.
The first step is to expand the available talent pool by increasing the number of female applicants for developer roles. According to the 2018 AI Index, men currently make up 71% of the applicant pool for AI jobs in the US – companies cannot hire more women without first receiving a greater volume of female applicants.
To achieve this, tech companies have to begin working with top universities and government agencies to ensure that computer science departments have access to the latest equipment and top personnel. This will have the added benefit of promoting the tech industry to female STEM graduates, rather than losing so many of them to fields like finance and consultancy. According to a recent survey conducted by EY, 41% of respondents nominated promoting female participation in STEM degrees as the top policy initiative for encouraging diversity in the most competitive industries.
However, where companies can make the most immediate impact is internally, by transforming their workplace culture so that it is more accommodating towards driven, technologically proficient women. Retention is arguably the biggest problem facing the industry: an estimated 56% of women who enter tech leave before they attain mid-level jobs. Simple reforms, such as allowing flexible work hours and having clear internal processes for investigating harassment and discrimination, can dramatically improve a company’s ability to maintain a gender-balanced workforce.
For lasting change, though, more attention and resources have to be directed at the C-suite level, where women are chronically under-represented within AI. This isn’t simply about giving a small number of women top jobs; rather, it’s a necessary step towards creating a positive narrative around women in tech. Ultimately, that can only be achieved by raising their profiles and celebrating female researchers when they drive the field forward. The importance of this cannot be overstated: without women to admire, future generations of women will continue to see the AI industry as closed to them.
I believe that everything I’ve laid out is achievable, provided AI companies take the issue seriously and recognise that any short-term disruption will be massively outweighed by the enormous growth potential of a gender-balanced industry. The challenge for industry leaders is to start reflecting on whether the AI they develop is meeting the needs of all end users, whether male or female.
There’s never been a more exciting time to be part of the AI industry, with venture capital funding for AI startups at record levels. This presents not only a fantastic opportunity to develop cutting-edge technology, but also to change perceptions of the industry and ultimately ensure that AI becomes an engine for positive change.