How Companies Can Learn to Use AI Responsibly

What is responsible AI, and why should companies worry about it instead of just focusing on competitive advantage and long-term profits?


The pros and cons of artificial intelligence have been debated in scientific circles, the media and the arts for several generations. Time will tell how these hopes and concerns play out over the next few decades, as adoption of AI projects is likely to soar.

In fact, according to a Cognilytica survey of public- and private-sector companies, almost 90% of respondents expect to have some form of AI implementation in progress within the next two years. This raises the decades-old question: Will AI improve our quality of life or diminish it?

We may never reach consensus on an answer to this question, but today, many individuals and organizations around the world are committed to ensuring AI will make the world a better place for all. Over 190 organizations have announced some form of ethical or “responsible AI” principles, for example, and many companies with mature AI programs, including Microsoft, Google and Facebook, have established internal processes to promote responsible AI.

But what is responsible AI, and why should companies worry about it instead of just focusing on competitive advantage and long-term profits?

The principles of responsible AI

Generally, responsible AI refers to the following principles:

  • Fair results – AI applications should deliver equitable results for all intended beneficiaries under all circumstances. Biased or poor-quality data can skew results, leading to failed projects and harm to individuals.
  • Data privacy and transparency – Data, including customer and employee data, is the fuel of AI. Protecting private information is essential at every step of the AI process, as is the ability to validate the protections put in place (a minimal pseudonymization sketch follows this list).
  • Fair treatment of the data supply chain – Millions of people around the world collect and label the data used for AI, and it is vital that these people be treated fairly.
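
As a small illustration of what the privacy principle can mean in practice, the sketch below pseudonymizes a direct identifier with a keyed hash before data enters an AI pipeline. The secret handling and field names are hypothetical; a real program would pair this with managed key storage, access controls and auditing.

```python
import hashlib
import hmac

# Hypothetical secret kept outside the dataset (e.g., in a managed
# key vault); hard-coding it as below is for illustration only.
PEPPER = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token,
    so records can still be joined without exposing the raw value."""
    return hmac.new(PEPPER, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustrative record: the email field is replaced before training
# data or analytics ever see it.
record = {"email": "jane@example.com", "age_band": "30-39"}
record["email"] = pseudonymize(record["email"])
print(record)
```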

From a business perspective, the most important reason for abiding by responsible AI principles is better business outcomes. AI projects that work optimally for the greatest number of people will see the widest adoption and generate the most revenue. Failed projects – or projects that harm people – can lead to financial loss, lawsuits and brand damage.

From a social perspective, putting responsible AI principles into practice helps to ease the often-cited fear of AI becoming a threat to humans – and instead helps ensure that everyone benefits equally from AI technology. As part of this, the AI industry needs to focus on establishing a viable employment model to mitigate the potential economic upheaval of AI-powered automation, and on treating the millions of data annotators around the world fairly.

7 strategies for ensuring responsible AI deployment

Even with the business and societal benefits of responsible AI well known, ensuring that companies create AI programs with ethics in mind remains a challenge – in part because organizations don’t know where to begin.

These seven strategies for building a responsible AI organization give companies that starting point, whatever cultural context they operate in.

1. Choose the right executive to lead the ethical component of AI initiatives

Organizations with mature AI programs recognize that responsible AI initiatives must be led by a C-level executive committed to ethics and social responsibility. Many organizations have recently created the role of Chief Equality Officer or Chief Diversity Officer, and Salesforce even has a Chief Ethical and Humane Use Officer. Only a strong C-level executive will have the influence to ensure responsible AI gets embedded into the fabric of an organization.

2. Create the right multi-stakeholder teams for AI projects

AI teams should be decentralized, with responsibility for AI shared across the organization. Technical roles include data scientists, engineers and QA professionals; non-technical roles include executive sponsors, domain stakeholders, project owners and project managers. Establish clearly how ethical oversight will work. Most organizations will need to retrain existing employees or hire new ones to build a high-functioning AI team with sufficient ethical oversight.

3. Build a diverse workforce

Create the greatest possible diversity among the teams that build and deploy AI. Unrepresentative workforces make it more likely that an AI system will embody implicit or otherwise unexpected biases.

4. Build ethics requirements into the AI project lifecycle and AI CoE charters

Whether it is part of each project’s business case or an overarching requirement defined by an AI Center of Excellence (CoE), ethical AI requirements must be clearly stated, along with mechanisms for ensuring and assessing the implementation of those requirements.

5. Build metrics-driven organizational learning around responsible AI

Without metrics, progress toward building responsible AI into the fabric of an organization cannot be measured. Develop metrics related to accountability, bias and security, and perform regular audits to assess progress. The World Economic Forum is also exploring the feasibility of third-party certification of responsible AI.
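
As one concrete illustration of such a metric, the minimal Python sketch below computes a demographic parity gap – the spread in positive-outcome rates across groups – over a batch of model decisions. The group labels and the 0.2 audit threshold are assumptions for illustration, not a prescribed standard.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Spread between the highest and lowest positive-outcome rates
    across groups. decisions: iterable of (group, positive) pairs,
    where positive is True/False. Labels here are illustrative."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        positives[group] += int(positive)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit over a batch of loan decisions.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap, rates = demographic_parity_gap(sample)
print(rates)        # per-group approval rates
if gap > 0.2:       # arbitrary illustrative tolerance
    print(f"Audit flag: demographic parity gap = {gap:.2f}")
```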

6. Source and work with unbiased, diverse training data

AI projects based on comprehensive, unbiased data will deliver equitable results for all intended beneficiaries under all circumstances; a project based on incomplete or biased data won’t deliver equitable results and can actually harm people. Whether the goal is non-discriminatory lending or a car’s GPS routing system that works for all potential users, ensure your AI project is powered by the right amount of the right kind of high-quality data.
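
To make the data-sourcing step concrete, here is a hedged sketch of a pre-training representation audit that compares each group’s share of a labeled dataset with an expected share. The expected shares and tolerance are assumptions one would set per project, not fixed values.

```python
from collections import Counter

def representation_report(groups, expected_shares, tolerance=0.05):
    """Flag groups whose observed share of the dataset deviates from
    an expected share by more than the tolerance. All targets are
    project-specific assumptions."""
    counts = Counter(groups)
    total = sum(counts.values())
    flagged = {}
    for group, target in expected_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - target) > tolerance:
            flagged[group] = (observed, target)
    return flagged

# Hypothetical example: audit group labels in a lending dataset.
labels = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
for g, (obs, tgt) in representation_report(
        labels, {"A": 0.5, "B": 0.3, "C": 0.2}).items():
    print(f"Group {g}: observed {obs:.0%} vs expected {tgt:.0%}")
```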

7. Ensure responsible principles are upheld across the AI supply chain

Responsible AI starts with a Crowd Code of Ethics for the people who collect and label the data. Its principles include fair pay in annotators’ respective markets, inclusion to ensure a diversity of perspectives, respect for annotators’ privacy and confidentiality, and more.

In 2021, with AI projects taking center stage at so many companies, we expect responsible AI to become a required topic of discussion in every boardroom.

Boards that mandate that AI governance programs incorporate the principles of responsible AI will enjoy better business outcomes, protect their brands, increase public trust, and help ensure that history records AI as a technology that truly improved our quality of life.

About Shaping the Future of Technology Governance

The World Economic Forum and Appen, a global leader in high-quality training data for machine learning systems, partnered to design and release standards and best practices for responsible training data in building machine learning and artificial intelligence applications. As a World Economic Forum Associate Partner, Appen is collaborating with industry leaders to release the new standards through the “Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning” platform, which provides a global guidepost for responsible training data collection and creation across countries and industries. The standards and best practices aim to improve the quality, efficiency, transparency and responsibility of AI projects while promoting inclusivity and collaboration. Their adoption by the larger technology community will increase the value of – and trust in – the use of AI by businesses and the general public.

Wilson Pang is Chief Technology Officer at Appen. Kay Firth-Butterfield is head of AI and machine learning and a member of the executive committee at the World Economic Forum.
