Date posted: 13th February 2024

Incorporating DEI into your company’s AI strategy

This article discusses the integration of diversity, equity, and inclusion (DEI) into AI strategies for companies, and emphasizes the importance of ensuring that artificial intelligence does not exacerbate existing inequalities. To achieve this, businesses should embed DEI principles into the design of AI systems, involving diverse stakeholders and training the AI on inclusive data. Additionally, upskilling programs should prioritize DEI, enabling underrepresented groups to benefit from AI opportunities. Ethical AI can also identify and rectify disparities, such as pay gaps, while enhancing workplace accessibility for people with disabilities. Ultimately, a people-centered AI transformation is essential for long-term growth and success.

This article was written by Andy Baldwin and published in the Harvard Business Review:

Microsoft co-founder Bill Gates says that AI is as revolutionary as the internet. Others argue that it is overhyped. In my view, it’s hard to overstate the potential of artificial intelligence (AI) generally and generative AI specifically. In many ways it’s a game-changer that will better enable businesses in every sector to drive innovation, productivity, and revenue.

What’s more, plenty of business leaders share my perspective. Nearly two-thirds (65%) of CEOs surveyed for the 2023 EY CEO Outlook Pulse Survey saw the technology as a force for good.

Yet AI cannot be a force for good if it serves to exacerbate existing inequalities in the workplace and society more broadly. If companies are to use automation to its maximum potential, they need to think carefully about who is going to benefit from the new roles and career opportunities that the technologies will inevitably create. Will it just be the same demographics who had access to these opportunities in the past? If so, what happens to historically marginalized groups? Will they become further disadvantaged by automation or potentially even displaced from the workplace altogether?

Advancing diversity, equity, and inclusion (DEI) is a great passion of mine. I spend a lot of time thinking about bias and equity as EY global managing partner-client service and a member of the EY global DEI steering committee. I believe that balancing DEI with automation in the deployment of AI systems is not a “nice to have” — it is a business imperative.

Efforts are being made globally to establish guidelines, regulations, and ethical frameworks to ensure that responsible AI is developed and deployed. Organizations and governments are increasingly recognizing the importance of addressing bias, transparency, accountability, and fairness in AI systems. Businesses will therefore be expected to deploy automation in responsible ways that drive diversity, equity, and inclusion within their organization, while mitigating AI-related risks and using the technologies to unlock the full innovation capacity of their workforce.

Here are three strategies that businesses can apply to achieve an effective balance between automation and DEI as they implement AI systems. These strategies are informed by my own experiences of introducing ethical AI frameworks into a global organization, taking into account industry best practice and governmental initiatives, as well as EY’s own research into the benefits of diversity in technology and AI development.

Embed DEI into the design of your AI system

We hear a lot about the risk of algorithmic bias, and the risk is indeed genuine. Without careful design, testing, and guardrails, AI can amplify and perpetuate existing biases. But this doesn’t have to be the case. In fact, a string of studies — covering mortgage lending, job screening, and justice decisions, among other professional activities — show that well-designed algorithms can actually be less biased than their human counterparts.

To create a well-designed AI system, it’s essential to involve a wide range of stakeholders from different demographic groups and backgrounds. This will help establish a system that is fair and transparent, respects diversity and different cultures, and can be easily accessed by all user groups. Also, training the AI system on data that reflects the complexity, diversity, and cultural richness of the real world is critical. That data should cover all potential use cases and be representative of all potential users.
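The representativeness check described above can be sketched in code. This is a minimal illustration, not a method from the article: the function name, the record format, and the 5% tolerance threshold are all hypothetical choices made for the example.

```python
from collections import Counter

def representation_gaps(records, reference_shares, key="group", tolerance=0.05):
    """Flag demographic groups whose share of the training data deviates
    from a reference population share by more than `tolerance`.

    `records` is a list of dicts each carrying a demographic `key`;
    `reference_shares` maps group name -> expected share (0..1).
    All names and thresholds here are illustrative.
    """
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        actual = counts.get(group, 0) / total if total else 0.0
        if abs(actual - expected) > tolerance:
            gaps[group] = {"expected": expected, "actual": round(actual, 3)}
    return gaps
```

In practice such a check would sit in the data-preparation pipeline, so that under-represented groups are surfaced before a model is ever trained on the data.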

A good example of a context where a poorly designed AI system can amplify bias is health care. Research by Imperial College London has highlighted that because the data used to train algorithms tends to be unrepresentative of minority ethnic groups, there is a risk that AI systems could exacerbate existing health inequities. In the case of skin cancer, for instance, if only images of white patients are used to train algorithms to spot melanoma, Black patients may experience missed diagnoses that lead to more life-threatening outcomes.
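The harm described here, missed diagnoses concentrated in one group, can be measured directly by comparing false-negative rates per demographic group. The sketch below is a hypothetical illustration of that audit, not the methodology of the Imperial College research; the function name and data layout are assumptions for the example.

```python
def false_negative_rate_by_group(y_true, y_pred, groups):
    """Compute the miss (false-negative) rate per demographic group.

    `y_true` / `y_pred` are 0/1 labels (1 = condition present);
    `groups` labels each case's demographic group.
    A large gap between groups signals the kind of diagnostic
    disparity the article warns about.
    """
    stats = {}  # group -> [positives, missed positives]
    for t, p, g in zip(y_true, y_pred, groups):
        pos, miss = stats.setdefault(g, [0, 0])
        if t == 1:
            pos += 1
            if p == 0:
                miss += 1
        stats[g] = [pos, miss]
    return {g: (miss / pos if pos else None)
            for g, (pos, miss) in stats.items()}
```

Comparing these per-group rates during model testing, rather than relying on a single aggregate accuracy figure, is one concrete way a development team can catch this failure mode before deployment.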

Finally, it is key to employ development and testing teams who are diverse in their experiences and demographics. A diverse team will be better equipped to recognize and challenge bias than a more homogeneous group, thereby reducing the risk that biases become unintentionally programmed into AI technologies.

Incorporate DEI into upskilling and training programs

Any organization that invests in AI also needs to invest in comprehensively upskilling its workforce to understand how to deploy, use, and manage AI tools. Employees should also be aware of AI-related risks, including algorithmic bias against certain demographic groups.

Read the full article to find out more about how you can incorporate DEI into upskilling and training programs and boost DEI in your workplace using AI.
