The Ethics Of AI In The Workplace

More than nine in ten leading businesses have ongoing investments in artificial intelligence (AI), including non-tech giants like McDonald’s, CVS Health, and Pfizer, and most report increasing that investment and incorporating AI into more of their processes.

While the use of AI is nothing new, recent advancements have made it easier for businesses to increase efficiency, productivity, output, and, ultimately, profit margins. No longer just a concept portrayed in sci-fi movies, AI is now common both in the workplace and in many American homes, summarizing data, providing personalized content recommendations, and increasing security.

But when this technology started assisting humans in hiring decisions and providing insight to guide health treatments, ethics was bound to become a concern. With little government oversight of AI, many businesses are left to figure out for themselves how to balance AI-driven efficiencies with complex moral dilemmas.

In this article, we cover some of the most common AI ethics concerns for businesses and tips for employers to successfully navigate through them.

AI Ethics Concerns In The Workplace

From regulatory compliance to discrimination and everything in between, many concerns about normalizing AI in the workplace are surfacing. Here are some areas we’ve seen spark debate about ethical AI.

AI And Bias

You might think that removing humans from the equation would eliminate unconscious bias, since AI relies on supposedly objective data. But because humans created and selected the data used to train the algorithm, AI can easily become an extension of the injustices already prevalent in today’s world.

Businesses need to be especially careful about using AI algorithms in their hiring processes, which 55% of HR professionals think will become the norm in the next five years. For example, Amazon created a recruitment tool using AI that was based on applicant data over a ten-year span. The company later scrapped the tool entirely after finding out that it was biased against women, since most of the company’s past applicants were men.
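One way employers screen hiring tools for the kind of bias Amazon encountered is the “four-fifths rule,” a rough test used by U.S. regulators for adverse impact in selection procedures. The sketch below is a minimal illustration with invented numbers, not real applicant data:

```python
# Hypothetical audit using the "four-fifths rule": if one group's selection
# rate is less than 80% of another group's, the tool warrants closer review.
# All applicant and hire counts below are invented for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Invented example: 200 applicants per group
rate_group_a = selection_rate(selected=60, applicants=200)   # 0.30
rate_group_b = selection_rate(selected=30, applicants=200)   # 0.15

# Ratio of the lower selection rate to the higher one
impact_ratio = min(rate_group_a, rate_group_b) / max(rate_group_a, rate_group_b)

# A ratio below 0.8 is a signal to investigate, not proof of bias
if impact_ratio < 0.8:
    print(f"impact ratio {impact_ratio:.2f}: flag for review")
else:
    print(f"impact ratio {impact_ratio:.2f}: passes initial screen")
```

A check like this is only a first-pass screen; a flagged ratio should trigger a deeper audit of the training data and model, not an automatic conclusion.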

AI And Job Replacement

With AI automation of certain tasks or processes, business leaders need to consider how this technology will impact their workforce. For example, if jobs are being replaced, what will happen to employees currently in those roles? Will they be laid off or reskilled to tackle other roles within the organization?

Employers also need to be aware of the fear AI has already created in their teams, considering more than one in four American workers fear AI will take their job. It’s critical to remind employees that while AI may replace some redundant tasks, it’s also creating new opportunities.

For example, in fields like health care and agriculture, new roles are appearing for niche data analysts who can interpret data gathered by AI to maximize efficiencies and improve outcomes. AI ethics expertise is also an increasingly in-demand skill.

AI And Human Reasoning

When AI is used in decision making, it’s still important that leaders are able to explain why the decision was made. These decisions may be life-altering for the people who are weeded out of jobs or financial programs, so organizations must be prepared to answer questions and clearly communicate the reasoning behind each decision.

The European Union’s General Data Protection Regulation codifies this principle: if someone is denied a home loan based on an AI recommendation, the issuing organization is required to provide a reasonable explanation for the rejection.

AI and Privacy, Data Security, and Compliance

AI is developing at an unrivaled speed, which means business leaders need to know not only the current regulations on AI and user data but also those on the horizon. When dealing with AI ethics in personal privacy and data security, businesses must ensure:

  • Personal data is securely processed and stored
  • The collection of user data is limited to what is needed
  • Processing is transparent and fair
  • Data is collected and used only for the specified purpose
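The purpose-limitation requirement above can be enforced in code as well as in policy. The following is a minimal sketch under assumed, invented field names and purposes, showing one way to strip a record down to only the fields approved for a given processing purpose before it is stored:

```python
# Hypothetical purpose-limitation filter. The purposes and field names
# here are invented for illustration, not a real schema.

ALLOWED_FIELDS = {
    "support_ticket": {"user_id", "email", "issue_description"},
    "analytics": {"user_id", "signup_date"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Drop every field not approved for the named processing purpose."""
    allowed = ALLOWED_FIELDS[purpose]
    return {key: value for key, value in record.items() if key in allowed}

raw = {
    "user_id": 42,
    "email": "a@example.com",
    "ssn": "000-00-0000",          # never approved for any purpose here
    "issue_description": "login fails",
    "signup_date": "2024-01-01",
}

print(minimize(raw, "analytics"))
# {'user_id': 42, 'signup_date': '2024-01-01'}
```

Centralizing the allow-list makes it auditable: reviewers can check one table of purposes against policy instead of hunting through every code path that touches personal data.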

Additionally, AI is being applied in governance, risk, and compliance programs, which can present legal challenges when it’s not properly governed. Businesses should create internal procedures and policies to ensure AI compliance tools are managed ethically and effectively.

How Can Businesses Navigate AI Ethics Challenges?

With so many challenges to consider when it comes to implementing AI, businesses may struggle to know where the technology should and shouldn’t be used. Here are four key points for business leaders to consider when making these decisions.

Value Human Decision-Making

Part of the appeal of AI-driven decision-making tools is that they’re able to quickly sort through large quantities of information and make intelligent decisions. But this raises the question: Should these systems be the ones to make the final call?

Because we’re still learning about the power of this technology, we don’t believe AI has rendered human reasoning dispensable. Humans bring strengths these systems lack, including the capacity for emotions like empathy. Be sure to build oversight processes for AI-led recommendations.

Address Employees’ Job Security Fears

Many workers are worried that AI will take over their jobs. Research shows that employees who fear losing their job to AI have a 27% lower intent to stay with an organization than those who feel secure, and Gartner estimates that businesses will lose $53 million annually to productivity lost over fears of AI.

Consider how your organization can address these fears head-on to avoid the dangerous AI rumor mill.

Keep It Transparent

When it comes to implementing AI, silence is a business’s worst enemy. Decisions around implementing AI tools should always be put out in the open, with benefits clearly explained and concerns acknowledged.

It’s crucial to communicate these decisions with your board, employees, and other key stakeholders so they don’t feel left in the dark. Better yet, make it a two-way conversation, as those who are on the ground doing the work may have brilliant ideas for making better use of the technology. This also helps employees feel heard, which is important for retention.

Be Responsible

Ultimately, businesses that want to wisely navigate AI ethics need to use the technology responsibly. This means:

  • Establishing ethical guidelines and an oversight committee to ensure ethical, legal, industry, and regulatory compliance
  • Conducting regular audits and assessments
  • Ensuring privacy, data protection, and consent
  • Training and educating team members on responsible AI use
  • Identifying and mitigating risks
  • Acknowledging and remediating biases
  • Taking accountability for errors
  • Ensuring its use aligns with company values

Tech giant Dell set a good example by creating a responsible AI framework. The company created and published its own principles for ethical AI to set boundaries and expectations for its employees on how AI should be used for good.

Adopt AI Responsibly With Expert Business Transformation Consulting

Mythos Group has deep expertise to help organizations like yours navigate today’s most complex business and technological challenges with ease. Contact us for strategic change management and digital transformation guidance that will keep you confident in your decisions.

Amit Patel
As the Founder and Managing Director of the Mythos Group, Amit has led a variety of global business transformations for Fortune 100, Fortune 500 and startup companies. He formerly spent time in managerial positions at Scient, Accenture, PwC, PeopleSoft and more.

1 COMMENT

  1. Hi Amit: thank you for this thoughtful post. When implementing AI-supported processes, it’s important for executives and project teams to remember that unethical choices are always on the table. Therefore, the hubris underpinning the statement, “that type of thing could never happen here,” needs to be expelled from internal conversations about AI.

    First, it’s important to unpack what “that type of thing” means. Nefarious intent? Dishonesty? Customer exploitation? What about the broader concern of “stakeholder harm”?

    Second, while an ethical oversight committee is a vital element for mitigating ethical risks, the presence of such a committee ensures nothing if it lacks the authority to take meaningful action. Its first task is often the most difficult one: to recognize and manage ethical risk in the first place. That requires an organizational culture committed to protecting its stakeholders from harm – whether physical, financial, or emotional.
