There’s more to workplace diversity than eliminating hiring bias


Written by Anna Holland Smith, Programme Manager - THG Technology and Innovation Accelerator, The Hut Group

Choosing new hires without knowing their names is a start, but it won’t solve tech’s diversity problem

The need for diversity in tech is not merely a question of fairness; technology should reflect its users, not only the teams that designed it. Organisations that fail to promote diversity in their workforce risk building products that do not appeal to, or cannot be used by, a large part of their user base.

Diverse perspectives also serve as a preventive measure against costly and embarrassing errors. Some famous missteps in recent years that have been a direct consequence of the lack of diversity include:

  • In 2015, Google Photos’ image-recognition feature embarrassingly mislabelled photos of two black people as gorillas. And, at the end of 2016, Google was forced to implement changes to an algorithm that provided anti-Semitic, racist, and sexist question suggestions via its autocomplete function.
  • In April 2016, Snapchat released a Bob Marley photo filter that clearly amounted to a form of blackface. Mere months later, it released an “anime” filter that was widely regarded as promoting racial stereotypes of Asian people and interpreted as a form of yellowface. The repeated racism drew considerable criticism and attention to the fact that Snapchat (unlike virtually every other major social network) had failed to release its diversity statistics.
  • When Apple released HealthKit and its Health app in 2014, it boasted plenty of features, including the ability to record blood pressure, steps walked, calories, sodium intake, respiratory rate, and even blood-alcohol level. It neglected, however, to include any way for users to track their menstrual cycles.

Many people are at least anecdotally familiar with economists Claudia Goldin and Cecilia Rouse’s research on orchestra auditions. In the 1970s, members of orchestras were approximately 95 percent male, and the typical audition process required musicians to perform in full view of a panel. However, a move toward anonymous auditions, sometimes called “blind” auditions, in which candidates played behind a screen, saw the proportion of women in orchestras rise to around 25 percent. An anonymous audition meant the panel could only judge candidates on the quality of their performance; there was no room for gender bias or other prejudices.


Anonymous hiring, in which identifying details are stripped from applications, aims to eliminate bias from the recruitment process, but can it prevent discrimination? It may remove unconscious bias from the initial selection process, but face-to-face interaction is inevitable at some stage. Typical anonymous hiring practices serve only to paper over the bias in the first instance; the traditional interview process that usually follows means decisions will ultimately be made based on some aspect of human interaction, which can incorporate the biases of those involved.

Even if hiring anonymously could eliminate all unconscious (or intentional) biases from the recruitment process, it does little to remove the same biases that exist in the workplace. Improved metrics at the hiring stage do nothing to address a workplace culture whose biases can cause retention problems, and those problems contribute significantly to the lack of diversity. Can hiring candidates anonymously really be seen as solving the problem, or is it merely deferring the discrimination and biased thinking, moving it further along the pipeline?

Research would seem to indicate that whilst approaches like anonymous hiring may bring more women into tech, bias often inhibits the progression of those women. In 2014, Kieran Snyder conducted a study of performance reviews from men and women working in the tech industry. She found that women were more likely than men to receive critical feedback, and far more likely to receive feedback on personality traits, which was often framed negatively. For example, men were described as “confident” or “assertive,” whereas a woman exhibiting the same behaviours was called “abrasive.” Common words used in critical reviews of women included “bossy,” “aggressive,” “emotional,” and “irrational.”

Perhaps the most worrying revelation from Snyder’s research was that the gender of the manager providing the review was not a determining factor: Female managers were just as likely to use those adjectives to describe other women as male managers were.

Research from Stanford University further emphasises how bias and gender stereotypes create barriers to advancement. It has found that managers perceive women as performing better in team-based or collaborative competencies, whereas men are viewed as more independent. The effect of these assumptions is that men and women are placed on divergent career paths, with men more often being earmarked for leadership roles.

Removing identifying information from an initial application will only have the desired effect on workplace diversity if it is accompanied by other changes. Organisations need to start addressing the bias itself, and the roots of that bias, by providing all their employees, even those not involved in hiring, with the training and insight needed to recognise their own biases and correct them appropriately.

Solving the diversity problem requires more than establishing a bias-free system for feeding new employees into a company. Companies must also work to remove biases from their workplace culture and promotion processes if they are to retain a diverse workforce and have that diversity reflected at all levels.


