Why Achieving AI Transparency is Critical for its Future Success


The rise of AI is creating new efficiencies that are drastically transforming jobs and industries. From automating data management to natural language processing and autonomous driving, AI has the power to touch every aspect of our lives.

The secret sauce behind AI's revolutionary models and algorithms, however, remains a mystery. Who decides what an autonomous car should do if it must choose between hitting a person or a dog in the street? A human brain can weigh such an emotional decision in milliseconds; an algorithm bases its decision entirely on its programming. The U.S. government is forming committees to explore AI regulation, and industries are quickly mobilizing to set their own standards. But if AI is to earn the level of trust it needs from the public, this moment in time is critical.

Technology leaders need to be open about how AI works and why programming decisions can have massive consequences when a careful methodology that includes input from diverse sources is not applied.

The question of who's behind the technology was often ignored until artificial intelligence began its rise to ubiquity. That's why most AI programs should consider carrying some kind of disclosure, much like the ingredients label found on consumer products.

Just as most of us like to know what we are consuming, businesses and consumers alike need to understand how AI works, and greater transparency is critical to the technology's continued adoption and acceptance. The fear of machines taking control of our everyday decisions is reason enough to make transparency a critical component of AI's success, but even more so we need disclosure about how systems gather information and how they will use it. In the near future, no one will accept an AI system without full disclosure.

The more AI becomes part of the fabric of our lives, the more important the following considerations become when developing AI software.

1. Ensure fairness
A mature AI system codifies the rules we as a society adhere to. But several questions dog the would-be rule makers, starting with whether the rule makers are themselves fair. For instance, when Stanford University announced its artificial intelligence institute, some noticed that not one of its 120 members was black.

Talk of bias in AI systems is now old hat. What's new are the mechanisms activists use to draw attention to the issue. Stanford, for instance, responded by providing the press with images of existing members of color who did not appear on the website. Such incidents inevitably invite talk of a tech cabal designed to foist the technology on an unsuspecting public. At the same time, plenty of organizations are working to make AI decision-making fairer.

2. Provide the means for a two-way dialog
Such talk can be dismissed if the tech community takes the necessary steps to show it is cognizant of the issue. Tech leaders also need to ensure that anyone affected by the decisions an AI system makes can easily reach its makers.

At the very least, makers of AI systems should include a way to contact them. If it's a phone number, it should be staffed; if it's an email address, the company should ensure that customers get a prompt response.

3. Collaboration with machines
Recent research from Accenture underscores that AI helps people do their jobs better. The path forward is deeper collaboration between humans and machines on their most daunting tasks. This synergy addresses the shortfalls of each: machines are getting better at explaining their decisions, while humans are getting better at interacting with machines effectively.

In the current environment, machines have more power than ever. But let's not lose sight of who creates these machines and whom they are ultimately designed to benefit. If we as an industry don't do our best to be as transparent as possible about AI, adoption will continue to be plagued by fear and misunderstanding.

Rob May
Rob May is the CEO and Co-Founder of Talla, providing AI-powered prediction, automation, and augmentation to service and support teams. Previously, Rob was the CEO and Co-Founder of Backupify (acquired by Datto in 2014). Before that, he held engineering, business development, and management positions at various startups. Rob has a B.S. in Electrical Engineering and an MBA from the University of Kentucky.

