Is AI Right for Your Company?

There’s been so much written extolling the uses of AI. Certainly, it’s exciting – filled with possibility and mystery, similar to the early days of the World Wide Web. With possible uses as a chatbot to serve customers, an illustrator to develop visuals, a content-creation tool for sales and marketing, or a facial-recognition tool for customer protection, it could become our future, now.

Everyone is excited, tempted by the seemingly infinite possibilities. But should companies adopt it? What use cases would work best? Are we set up to manage data breaches or operations breakdowns? Are our stakeholders equipped to handle measurement and oversight?

Certainly, consultants have shown up in droves to help companies use, implement, and govern AI. But I think those efforts are premature. The question becomes: how do we determine the risks? And if we know precisely what they are and we’re willing to accept them, are we set up well enough to resolve them? Unfortunately, there’s no way to know what we don’t know when we begin.

Although there are many types and forms of AI, all represent some form of unknowable risk that must be managed before deciding to adopt the new capabilities:

– What types of risks are involved?
– Who should be included in decision making?
– Who will implement, govern, and maintain it over time?
– What if customers lose trust in the company because of inherent biases?
– Who will design, program, and implement it?
– What will be the ‘cost’ – in money, resources, market share, personnel, loss of trust, and governance? How do we know it’s worth it…before we begin?
– What’s the legal liability? How will we manage, govern, and assess data accuracy, privacy, cybersecurity, and misinformation?

These are just a few of the risks. (For a more exhaustive list, see the DOE AI Risk Management Playbook: https://www.energy.gov/ai/doe-ai-risk-management-playbook-airmp)

Until the ‘costs’ of bringing AI into a company are identified and accepted, the true risks of adoption won’t be known until implementation. If any of the following apply:

  • if there’s a likelihood of data breaches and operations breakdowns,
  • if there’s no buy-in from everyone who will work with it,
  • if it causes problems with current technology,
  • if it plays havoc with the workforce and daily routines,
  • if there’s no way to assess the risks,
  • if it erodes trust with customers,
  • if biases, misinformation, or data breaches corrupt its use,
  • if there are legal liabilities or ethical pitfalls,
  • if you can’t verify, govern, or manage the ongoing costs (resources/money),
  • if there’s no agreement on managing the ethics, verification, or regulations,

it may not be the right time to adopt it. So how will we know the risks before we make costly decisions? The simple answer is, we won’t.

NEEDS ASSESSMENT

Before deciding whether to bring AI onboard, which form or technology to choose, who will develop and implement it, how to manage the cyber risks, or which use case is best, I suggest you set up a structure to organize around shared risk.

Make sure that everyone who will touch the new capability is part of the solution or they’ll end up as part of the problem: they must have some voice in the final decisions, be aware of the risks to them and their jobs, and be willing to accept responsibility if the risks become problematic.

Begin by assembling a full set of representative viewpoints to gather data on team- or company-wide needs, and to assess whether AI would be the best solution to existing problems. Remember: without stakeholder buy-in for a use case, or teams in place for ongoing oversight, you’ll add the risk of resistance to the pile of other dangers.

Here are some questions to consider to help you decide:

  • Do we have problems that could be solved with AI? Why haven’t they been solved (e.g., time, money, capability)? This is important, as the reasons they’re unresolved may trail the new implementation. Are the risks of a new AI implementation higher than the risks of leaving the problems unresolved? (See the sketch after this list for one way to weigh this.)
  • Can problems be fixed in-house, or must we hire in? What are the costs of hiring in (time, money, disruption, ongoing governance, legal liability), and can existing folks be trained to maintain the result?
  • Is there an oversight team to monitor, govern, implement, measure, assess, manage risks, check ethical standards, etc.?
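
To make those trade-offs concrete, here is a minimal sketch in Python of how a team might record its answers and compare the risk of adopting AI against the risk of leaving a problem unresolved. Everything here (the UseCase fields, the 1-10 scales, the hiring-in penalty) is a hypothetical placeholder, not a prescribed method; each company would substitute its own criteria.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """One candidate problem an AI implementation might solve.
    All fields and scales are hypothetical, for illustration only."""
    name: str
    risk_if_unresolved: int    # 1-10: cost of leaving the problem as-is
    risk_of_adoption: int      # 1-10: breaches, bias, disruption, liability
    can_build_inhouse: bool    # False means hiring in, with its added costs
    oversight_team_exists: bool

def worth_pursuing(case: UseCase) -> bool:
    """Toy decision rule: pursue only when the unresolved problem
    outweighs the adoption risk AND governance is already in place."""
    if not case.oversight_team_exists:
        return False  # no oversight team yet: defer the decision
    penalty = 0 if case.can_build_inhouse else 2  # hiring in adds cost/disruption
    return case.risk_if_unresolved > case.risk_of_adoption + penalty

# Example with made-up numbers:
chatbot = UseCase("customer-service chatbot",
                  risk_if_unresolved=7, risk_of_adoption=5,
                  can_build_inhouse=False, oversight_team_exists=True)
print(worth_pursuing(chatbot))  # False: the hiring-in penalty tips the balance
```

The value of even a toy rule like this is that it forces the group to write down, and argue about, what counts as a risk and how much each one matters before any money is spent.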

Ultimately, although AI is ‘technology’, it’s a people problem. There must be broad agreement to generate a new offering or fix an unresolved problem with AI. And everyone must know what would change daily for them. Because of the security risks, operations breakdowns, and data-privacy and misinformation issues, and because of the risks to daily work routines, potential job losses, erosion of corporate trust, and legal and governance costs, it’s a decision that goes beyond the tech folks or the leaders.

MANAGING THE RISK

Take heart! There are markers that can help minimize the risk. I suggest companies address the following:

  • Generate rules and norms of use that match the company identity and values.
  • Know the tolerance for risk in terms of time, resources, reputation, and governance. Assemble a set of parameters that represent the risk factors for the implementations you’re considering (a simple sketch of such a register follows this list):

o  Customer ease of use
o  Implementation – internal or external
o  Job loss
o  Ethics
o  Privacy, cyber security, verification
o  National, corporate regulations and governance
o  Resource expenditure, cost of possible upheaval
o  Changes to corporate messaging
o  Etc. (Unique criteria to be decided within each company.)

  • Agree on the goals and how they’ll be maintained, measured, and governed over time.
  • Manage buy-in from employees and customers. Assess customers’ acceptance.
  • Establish oversight team(s), including legal.
  • Iterate risk reviews into the project lifecycle, with checklists of suggested mitigations.
  • Know the risks of failure. Agree that the risks are worth it.
  • The use must match the company strategy; don’t use AI just to use AI.
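
One way to operationalize these parameters, sketched below in Python, is a simple weighted risk register that the oversight team scores and compares against the agreed tolerance. The factor names mirror the list above; the weights, scores, and threshold are invented for illustration, since the tolerance itself is something each company must decide for itself.

```python
# A minimal risk-register sketch. Factor names mirror the list above;
# the weights, scores, and tolerance threshold are invented for
# illustration and would be unique to each company.
RISK_FACTORS = {
    # factor: (weight, score 0-10 assigned by the oversight team)
    "customer_ease_of_use":  (1.0, 3),
    "implementation":        (1.5, 6),   # internal vs. external build
    "job_loss":              (2.0, 5),
    "ethics":                (2.0, 4),
    "privacy_security":      (2.5, 7),   # privacy, cybersecurity, verification
    "regulation_governance": (1.5, 6),   # national and corporate
    "resource_cost":         (1.0, 5),   # expenditure, possible upheaval
    "messaging_changes":     (0.5, 2),
}

RISK_TOLERANCE = 60.0  # hypothetical company-wide ceiling

def total_risk(factors: dict[str, tuple[float, int]]) -> float:
    """Weighted sum of factor scores: one crude way to compare a
    proposed implementation against the agreed risk tolerance."""
    return sum(weight * score for weight, score in factors.values())

score = total_risk(RISK_FACTORS)
print(f"weighted risk {score:.1f} vs. tolerance {RISK_TOLERANCE}")
print("within tolerance" if score <= RISK_TOLERANCE else "revisit before adopting")
```

The point of the exercise isn’t the arithmetic; it’s getting the group to agree, in writing, on what the factors and the tolerance actually are before implementation begins.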

Net net: without understanding and addressing the full set of risks involved with implementing the new capability, without teams to give attention to the greatest risks as they appear, without legal, governance, and measurement protections, and without broad stakeholder buy-in, the cost of adoption may be too high. AI is a great addition to a world of choice and possibility. But without managing the corporate risks, the downsides might outweigh the benefits.

Sharon-Drew Morgen
I'm an original thinker. I wrote the NYT bestseller Selling with Integrity and 8 other books bridging systemic brain-change models with business, for sales, leadership, communications, and coaching. I invented Buying Facilitation(R) (buy-side support), How of Change(tm) (creates neural pathways for habit change), and listening without bias. I coach, train, speak, and consult for companies and teams that seek Servant Leader models.
