Applying AI in an Ethical Way: Is it Possible?



Businesses of all kinds are embracing the AI revolution, and for good reason. AI technology promises to not only automate routine tasks — giving human staff more time and energy for higher-order work — but also boost productivity and performance, leading to growth and a better bottom line.

Yet, the widespread adoption of AI also raises new risks, especially for customers. In my experience as an AI specialist, however, businesses can take an ethical approach that mitigates these problems.

AI has a bias problem

Recent research suggests that AI replicates problematic biases against individuals in historically marginalized communities. Since AI learns from gathering data from the past and using it to make decisions in the present, it can reify previous discriminatory practices and entrench them for the future.

For instance, ProPublica found that law-enforcement risk-assessment algorithms, which can inform bond and sentencing decisions, are biased against people of color. The algorithm it examined systematically rated Black defendants as likely to commit future crimes while gauging the risk as low for white defendants. Even after controlling for other factors, the authors write, “Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime, and 45 percent more likely to be predicted to commit a future crime of any kind.”

In another example, three companies’ facial-recognition AIs proved less reliable at identifying women’s faces than men’s, and darker faces than lighter ones. The category they were worst at identifying was, of course, women of color: the gap in error rates between lighter-skinned men and darker-skinned women was as high as 34 percent.

Studies have even shown that AI programs for recruiting have discriminated against women and parents.

Correcting AI bias

For this reason alone, AI should never be left to operate unsupervised. Companies should not assume that AI innately treats people equitably, but rather consider the possibility that it doesn’t.

The good news is that AI experts can help with bias detection by looking for patterns in the AI’s outcomes. They can also apply mitigation techniques, auditing the system’s processes to correct for societal prejudices and ensure fairness in its outputs.
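One common way to look for patterns in outcomes, as described above, is to compare a model’s favorable-decision rate across demographic groups. The sketch below is a minimal, hypothetical illustration of that idea using the “four-fifths rule” as a screening heuristic; the data and group labels are invented, and a real audit would go much further.

```python
# Minimal sketch of outcome-based bias detection (hypothetical data).
# The disparate impact ratio compares the lowest group selection rate
# to the highest; values below 0.8 are a common red flag.

def selection_rates(decisions, groups):
    """Favorable-decision rate per group (decisions are 1 or 0)."""
    rates = {}
    for g in set(groups):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return rates

def disparate_impact_ratio(decisions, groups):
    """Ratio of lowest to highest group selection rate."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: model approvals for two demographic groups.
decisions = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)        # A: 0.8, B: 0.2
ratio = disparate_impact_ratio(decisions, groups)  # 0.25, well below 0.8
```

A gap like this does not prove discrimination on its own, but it tells auditors exactly where to investigate the system’s inputs and training data.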

Ensure transparency and accountability

This is also why businesses should ensure transparency and accountability. Customers and other stakeholders need to know when AI algorithms and decision-making processes are in play so they can help draw attention to biased or unethical outcomes.

While business leaders might fear this scrutiny and accountability at first, a brand’s willingness to address problems head-on is the best way to tackle potential bias and signal corporate social responsibility. At a time when consumers increasingly prioritize that responsibility, this stance is an investment in the future, inspiring customer loyalty in the long run.

Protecting personal information

Furthermore, to implement AI ethically, businesses must prioritize users’ privacy. Whenever consumers’ information is gathered, their informed consent should be obtained and their data safeguarded. As such, businesses should adopt stringent data protection measures and invest in cutting-edge cybersecurity.
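One basic data protection measure in the spirit of the paragraph above is pseudonymization: replacing direct identifiers with stable, non-reversible tokens before data reaches analytics or AI pipelines. The sketch below is a hypothetical illustration using a keyed hash; the field names and the salt-handling are assumptions, and a production system would manage the secret in a vault and layer on further controls.

```python
# Minimal sketch of pseudonymizing a personal identifier (hypothetical).
# A keyed hash (HMAC) lets records be linked without storing raw emails.
import hmac
import hashlib

# Assumption: in practice this secret lives in a secrets manager, not code.
SECRET_SALT = b"rotate-me-and-store-in-a-vault"

def pseudonymize(email: str) -> str:
    """Replace an email address with a stable, non-reversible token."""
    normalized = email.strip().lower().encode("utf-8")
    return hmac.new(SECRET_SALT, normalized, hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "purchase": 42.50}
safe_record = {**record, "email": pseudonymize(record["email"])}
```

Because the same email always maps to the same token, analysts can still join datasets and count unique customers, while the raw identifier never leaves the ingestion layer.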

Ethical frameworks

Much like developing an organization’s mission statement or code of conduct, brands should create ethical frameworks for incorporating AI that help the technology uphold human values and promote societal well-being. At the end of the day, businesses turn to AI to serve human beings, not the other way around.

Having these clear ethical guidelines in place will not only guide the development and deployment of AI within organizations but also foster trust with key stakeholders, including those all-important customer groups. And when problems do arise, the policies give the organization a ready framework for responding and minimizing harm.

When it comes to the future of implementing AI, brands must take an explicitly ethical, transparent, and human-centered approach to this revolutionary technology. In this way, they can harness the technology’s immense potential while alleviating the bulk of its associated risks.

Ed Watal
Ed Watal is an AI thought leader and technology investor. One of his key projects is BigParser (an ethical AI platform and data commons for the world). He is also the founder of Intellibus, an Inc. 5000 “Top 100 Fastest Growing Software Firm” in the USA, and the lead faculty of AI Masterclass, a joint operation between NYU SPS and Intellibus. Forbes Books is collaborating with Watal on a seminal book on our AI future. Board members and C-level executives at the world’s largest financial institutions rely on him for strategic transformational advice.