Balancing the Advantages and Biases of AI in the Workplace

Companies forging ahead with artificial intelligence need good governance and a plan to balance the skepticism that threatens to undermine AI initiatives.

Artificial intelligence-based tools already exist to help companies source and screen applicants (Textio, ZipRecruiter), monitor employee behavior (Humanyze), and assess employee performance (IBM Talent Management). These and other machine learning-driven applications analyze employee sentiment, generate predictions about employee flight risk, and make decisions about hiring, productivity, and even promotions. Accordingly, businesses across many industries are moving ahead aggressively with plans to implement artificial intelligence (AI) in pursuit of better decision-making, reduced costs, and an improved ability to react to and integrate new information. But just as efforts to reduce inequalities in the workplace seem to be gaining traction in some sectors, experts warn that AI has the potential to introduce and perpetuate bias, especially in human resource management.

What Research Shows

Data suggest that many employees are ready to work in an AI-driven world. In a 2019 survey of more than 3,500 employees across 11 countries conducted by Dale Carnegie Training, more than half said they would be at least somewhat likely to apply for a position in which their activities were monitored continuously for the purpose of tracking, evaluating, and improving productivity. Even more said they were comfortable with their company using AI to plan their career path (Dale Carnegie & Associates research on attitudes toward artificial intelligence: a survey of 3,586 full-time employees conducted in the U.S., India, China, Taiwan, Germany, the UK, Sweden, Norway, Italy, Canada, and Brazil, January 2019).

But not all employees feel the same. The research also shows a wide gap in the level of concern expressed by individual contributors and those in higher-level roles. When asked whether they would be willing to accept an AI-generated performance appraisal whose criteria were not fully transparent, 47 percent of those at the director level or above said they would be at least somewhat likely to do so, compared with only 18 percent of individual contributors.

Apprehension Toward AI in the Marketplace

While some leaders embrace the idea of using AI to help them make decisions about human capital, others hesitate, and perhaps with good reason. High-profile examples of AI failures aren't hard to find. Facebook, for example, was forced to announce stronger measures to prevent advertisers from using its AI-driven ad targeting options to exclude potential job applicants based on race or ethnicity. And it wasn't long ago that Amazon scrapped its AI-based candidate screening system after discovering it was disadvantaging female candidates for technical positions, even though research from Stanford suggests investors prefer tech companies with gender diversity.

Apprehension in the workplace is real. Research reveals that more than six in 10 employees admit to being at least moderately concerned about biases that humans build into AI systems. And just because AI can predict something doesn't mean that prediction should be used. That's the tension behind the predictive power vs. appropriateness tradeoff. To use a simple example, in an organization using AI to determine how best to target its hiring efforts to fill its future leadership pipeline, predictive power may suggest the desirability of hiring more white males: AI may correctly determine from historical data that they have the best track record of reaching leadership positions at the company. Most people, however, would strongly object to acting on that prediction, recognizing that racism and gender bias shaped the historical record it rests on.
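To make that tradeoff concrete, consider the following minimal sketch in Python. It uses entirely synthetic, hypothetical data (not anything drawn from the research cited here) to show how a model trained on historically biased promotion records faithfully reproduces that bias, scoring two equally skilled candidates differently based on group membership alone.

```python
# A hypothetical sketch of the predictive-power-vs.-appropriateness tension.
# All data is synthetic and illustrative; variable names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic workforce: a group flag and a skill score that is
# identically distributed across both groups.
majority_group = rng.integers(0, 2, n)   # 1 = historically favored group
skill = rng.normal(0, 1, n)

# Historical promotions: skill mattered, but group membership mattered
# too -- this term encodes the bias baked into the training data.
promoted = (0.8 * skill + 1.5 * majority_group + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([majority_group, skill])
model = LogisticRegression().fit(X, promoted)

# The model looks "accurate" when judged against the biased history...
print("accuracy against biased history:", model.score(X, promoted))

# ...yet for two candidates with identical skill, it recommends the
# majority-group member far more strongly: high predictive power,
# low appropriateness.
same_skill = np.array([[1, 0.0], [0, 0.0]])
print("promotion probability, majority vs. minority:",
      model.predict_proba(same_skill)[:, 1])
```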

AI Biases

Bias in AI is a technological problem rooted in society and history. Experts suggest it is all but inevitable: Even well-intentioned programmers writing algorithms that “appear reasonable and nondiscriminatory on the surface” have been shown to have the potential for bias (Osoba, O., and Welser, W. (2017). “An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence,” RAND Corporation. Retrieved from https://www.rand.org/pubs/research_reports/RR1744.html). Complicating matters, AI is remarkably good at inferring missing information, so simply withholding gender, race, age, or other protected-status data isn't the answer. Some researchers suggest doing so may even be harmful, because it makes bias harder to identify (Williams, B., Brooks, C., and Shmargad, Y. (2018). “How Algorithms Discriminate Based on Data They Lack: Challenges, Solutions, and Policy Implications,” Journal of Information Policy, 8, 78-115. doi:10.5325/jinfopoli.8.2018.0078).
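The sketch below illustrates that point, again with synthetic, hypothetical data. Even when the protected attribute is withheld from a model entirely, a correlated proxy feature (here, an invented neighborhood-like variable) lets the model effectively recover it, so its scores still split along group lines while the missing column makes the disparity harder to audit.

```python
# A hypothetical sketch of discrimination based on data a model lacks.
# All data is synthetic and illustrative; variable names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, n)             # protected attribute (withheld)
proxy = group + rng.normal(0, 0.5, n)     # correlated feature, e.g., neighborhood
skill = rng.normal(0, 1, n)

# Biased historical outcomes driven in part by group membership.
outcome = (0.8 * skill + 1.5 * group + rng.normal(0, 1, n)) > 1.0

# "Blind" model: trained without the protected attribute at all.
X_blind = np.column_stack([proxy, skill])
blind_model = LogisticRegression().fit(X_blind, outcome)

# The proxy lets the withheld attribute be recovered with high accuracy...
leak_check = LogisticRegression().fit(X_blind, group)
print("protected attribute recoverable from 'blind' features:",
      leak_check.score(X_blind, group))

# ...so the blind model's scores still differ sharply by group.
scores = blind_model.predict_proba(X_blind)[:, 1]
print("mean score, group 1 vs. group 0:",
      scores[group == 1].mean(), scores[group == 0].mean())
```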

Clearly, there is potential for legal ramifications as well. In fact, the Facebook changes came about as part of a settlement of legal action brought by the National Fair Housing Alliance, the American Civil Liberties Union, and others. That means company leaders can't rule out being held accountable for what their algorithms are doing, whether they understand them or not.

Leaders aren't wrong to be cautious. Yet businesses cannot afford to ignore AI's potential in the race to remain competitive in the marketplace. And given the stakes involved, work certainly will progress on technological ways to help root out bias, regardless of its origin.

AI Governance

In the meantime, companies forging ahead with AI need good governance and a plan to balance the skepticism that threatens to undermine AI initiatives. The Dale Carnegie survey identifies three factors important to driving positive attitudes toward AI among employees:

1. A clear understanding of what the AI is intended to do and how it will work

2. Confidence that they can develop the skills needed to adapt to the changes resulting from AI

3. Trust in the organization's leadership

To achieve those outcomes, leaders must remember that employees may see AI differently than they do, and they will need the communication skills to address those differing perspectives with empathy. It will take courage to proactively examine processes and algorithm outcomes, and to admit to and deal with bias. They'll also need to be transparent about the technologies being implemented and how those technologies will affect stakeholders, along with the foresight and commitment to help their people develop the skills they will need for the future.

Communication. Empathy. Courage. Commitment. It seems the capacity to move boldly ahead with artificial intelligence, with all its promise and potential pitfalls, will depend heavily on human skills, not technology alone.

Mark Marone, Ph.D., is the director of Research & Thought Leadership for Dale Carnegie & Associates, where he is responsible for ongoing research into current issues facing leaders, employees, and organizations worldwide. He has written frequently on topics related to leadership, sales, and customer experience and has co-authored two books on sales strategy. For more information, contact him at: mark.marone@dalecarnegie.com.