How to Leverage Retrieval Augmented Generation (RAG) in the Enterprise: Organization Readiness


Retrieval Augmented Generation (RAG) uses the data of the enterprise as a source of truth, rather than relying on the knowledge embodied in a large language model (LLM), which is not enterprise specific. I have described digital maturity and readiness in a prior article, including the four critical areas for AI readiness:

  • Effective selection of scope and use cases.
  • Evaluation of knowledge quality including structure, fitness to purpose and metadata enrichment.
  • Establishment of baseline metrics to demonstrate measurable success.
  • Ongoing governance and decision making.
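The retrieve-then-generate loop behind RAG can be sketched in a few lines. The example below is purely illustrative: it uses naive keyword overlap in place of a vector store and a stub in place of an actual LLM call, and all names are hypothetical.

```python
# Minimal sketch of the retrieve-then-generate loop behind RAG.
# A production system would use a vector store and a real LLM call
# instead of these stand-ins.

def retrieve(question, documents, top_k=2):
    """Rank enterprise documents by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate(question, context):
    """Stand-in for an LLM call: the answer is grounded only in retrieved context."""
    return f"Based on company documents: {' '.join(context)}"

docs = [
    "Router model X100 supports firmware resets via the rear button.",
    "Our refund policy allows returns within 30 days.",
]
context = retrieve("How do I reset my X100 router?", docs)
answer = generate("How do I reset my X100 router?", context)
```

The point of the sketch is the division of labor: enterprise documents supply the facts, and the model only composes an answer from what was retrieved.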

I want to delve into the necessary elements for successful RAG deployment in the enterprise, which is part of the process of successfully testing and deploying ChatGPT-style applications.

Let’s dig into more detail around the elements required for organizational readiness. These elements fall into three broad categories: organizational alignment, data readiness, and technical readiness.

Organizational Alignment

Use Case Definition Maturity

Defining the correct use cases is essential to AI success. They need to be specific, unambiguous and verifiable. “Using an LLM for customer support” is not specific enough. How will that be tested? You need concrete examples of functionality such as “Use RAG to answer troubleshooting questions about our consumer routers for non-technical users.” Even then, the specific questions need to be articulated so the function can be verified as pass or fail.
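Articulating the specific questions turns a use case into something testable. A minimal sketch of such a pass/fail evaluation set is shown below; `rag_answer` is a placeholder for the deployed pipeline, and the case and keywords are illustrative.

```python
# Sketch of turning a use case into verifiable pass/fail checks.
# rag_answer() stands in for the real RAG pipeline under test;
# the evaluation cases and required keywords are illustrative.

def rag_answer(question):
    """Placeholder for the deployed RAG pipeline."""
    canned = {
        "How do I reset my router to factory settings?":
            "Hold the reset button on the back for 10 seconds.",
    }
    return canned.get(question, "I don't know.")

# Each use case ships with concrete questions and required answer content.
evaluation_set = [
    {"question": "How do I reset my router to factory settings?",
     "must_contain": ["reset button", "10 seconds"]},
]

def run_evaluation(cases):
    results = []
    for case in cases:
        answer = rag_answer(case["question"]).lower()
        passed = all(kw.lower() in answer for kw in case["must_contain"])
        results.append((case["question"], passed))
    return results

results = run_evaluation(evaluation_set)
```

Each question either passes or fails, so the use case can be verified rather than merely demonstrated.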

Measurable Success Criteria

Equally important are the metrics for supported processes. What are the baselines? For customer support there are many – time per incident, first-call resolution and many others. Every use case needs to identify how success will be measured – how the process is instrumented, the current baseline and targeted outcomes.
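Establishing a baseline can be as simple as computing the current values of the chosen metrics from instrumented records. The field names and sample records below are hypothetical.

```python
# Illustrative baseline computation for two common support metrics.
# Field names and sample incident records are hypothetical.

incidents = [
    {"minutes": 18, "resolved_first_call": True},
    {"minutes": 42, "resolved_first_call": False},
    {"minutes": 25, "resolved_first_call": True},
    {"minutes": 35, "resolved_first_call": True},
]

# Baseline: average handling time per incident, in minutes.
avg_minutes = sum(i["minutes"] for i in incidents) / len(incidents)

# Baseline: first-call resolution rate.
fcr_rate = sum(i["resolved_first_call"] for i in incidents) / len(incidents)
```

With these baselines recorded before deployment, the targeted outcomes (say, a lower average handling time) become verifiable claims rather than impressions.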

Leadership Strategy Support

Leadership needs to be on the same page with regard to vision, functions, capabilities, use cases, measures, investment, and organizational attention. Expectations should be managed through a proof of value, not a proof of concept. According to Accenture, 90% of AI projects never go past proof of concept; in other words, they are never deployed to production.

Data Science Team Maturity

Are in-house individuals available who understand how the LLM needs to be fine-tuned and how knowledge and content need to be organized? Are these individuals on board with the process use cases and objectives? Do they understand what knowledge is required and the role of a knowledge graph? Many tech organizations still believe only in a pure data and algorithm-based approach and do not appreciate the role of information architecture applied to content and knowledge.

Intellectual Property Security Plan

How will IP be ingested and protected? This includes curation and alignment with regulatory requirements, privacy issues and security of trade secrets. RAG can mitigate leakage of IP; however, the organization needs a comprehensive plan as to how this is accomplished.

Data Readiness

Product Data Maturity

How well is product data quality understood, managed and measured? Good product data is necessary for a smooth ecommerce experience, for product associations and relationships, and for improved product recommendations. Product data scorecards cut across multiple dimensions and need to be monitored for ongoing improvement.
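One common scorecard dimension is attribute completeness across the catalog. The sketch below is illustrative; the attribute names and sample records are hypothetical.

```python
# Illustrative product data scorecard: completeness per attribute
# across a catalog. Attribute names and records are hypothetical.

catalog = [
    {"sku": "A1", "title": "Router X100", "image": "x100.jpg", "category": None},
    {"sku": "A2", "title": "Router X200", "image": None, "category": "networking"},
]

def completeness_scorecard(products, attributes):
    """Fraction of products with a non-empty value for each attribute."""
    return {
        attr: sum(1 for p in products if p.get(attr)) / len(products)
        for attr in attributes
    }

scorecard = completeness_scorecard(catalog, ["title", "image", "category"])
```

Tracked over time, scores like these show whether remediation efforts are actually moving the needle.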

Content Maturity

Content operations determine how content to support customers and employees is managed throughout its lifecycle. As with product data, content hygiene is essential and will be the source of truth behind question answering and support chatbots. Metrics and scorecards with intentional content remediation and componentization are table stakes for RAG.
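Componentization means breaking long articles into smaller retrievable units, each carrying metadata. A minimal sketch, with illustrative field names and a simple word-count boundary, might look like this:

```python
# Sketch of componentizing a support article into retrievable chunks,
# each tagged with metadata. Structure and field names are illustrative;
# real systems often split on semantic boundaries rather than word counts.

def componentize(article_title, body, max_words=40):
    """Split an article into word-bounded components with metadata."""
    words = body.split()
    components = []
    for start in range(0, len(words), max_words):
        chunk = " ".join(words[start:start + max_words])
        components.append({
            "title": article_title,
            "component_id": f"{article_title}#{start // max_words}",
            "text": chunk,
        })
    return components

parts = componentize("router-troubleshooting", "word " * 100)
```

Each component becomes an independently retrievable unit, which is what lets a RAG system answer from the relevant passage rather than an entire document.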

Customer Data Maturity

Are customer attributes captured and normalized with consistent descriptors across touchpoints? A customer identity graph will be a core part of personalization and contextualization. Customer data metrics and KPIs should be monitored throughout the customer lifecycle.
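At its core, an identity graph links records from different touchpoints when they share an identifier. The sketch below uses a simple union-find structure; the sample records and identifiers are hypothetical.

```python
# Sketch of a tiny customer identity graph: records from different
# touchpoints are linked when they share an identifier (union-find).
# The sample records are illustrative.

def resolve_identities(records):
    """Group records sharing any identifier into one customer cluster."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link every identifier within a record to the record's first identifier.
    for rec in records:
        ids = list(rec.values())
        for other in ids[1:]:
            union(ids[0], other)

    # Records whose identifiers share a root belong to the same customer.
    clusters = {}
    for rec in records:
        root = find(list(rec.values())[0])
        clusters.setdefault(root, []).append(rec)
    return list(clusters.values())

records = [
    {"email": "a@x.com", "phone": "555-0100"},   # web signup
    {"phone": "555-0100", "loyalty": "L-77"},    # in-store purchase
    {"email": "b@y.com"},                        # support ticket
]
clusters = resolve_identities(records)
```

The first two records share a phone number, so they resolve to one customer; the third stands alone until a shared identifier appears.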

Governance Maturity

Are various elements and components of the enterprise information ecosystem governed through clear policies and procedures with operational metrics that are monitored with course corrections if they are out of range? A metrics-driven governance playbook provides a mechanism to ensure the return on investment in new initiatives, to allocate investments appropriately, to ensure accountability and to tie various programs and initiatives together. Proper governance also ensures that projects are coordinated and practices are harvested and shared with less duplication of effort and wheel reinvention.
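"Monitored with course corrections if they are out of range" implies each governed metric has an explicit allowed range. A minimal sketch of such a check, with hypothetical metric names and thresholds:

```python
# Sketch of a metrics-driven governance check: flag any metric that
# drifts outside its allowed range. Metric names, ranges, and observed
# values are illustrative.

policy = {
    "answer_accuracy": (0.90, 1.00),
    "avg_latency_seconds": (0.0, 3.0),
}

observed = {"answer_accuracy": 0.86, "avg_latency_seconds": 2.1}

def out_of_range(policy, observed):
    """Return (metric, value) pairs that violate the governance policy."""
    flags = []
    for metric, (low, high) in policy.items():
        value = observed[metric]
        if not (low <= value <= high):
            flags.append((metric, value))
    return flags

flags = out_of_range(policy, observed)
```

Flagged metrics are what trigger the course corrections the playbook prescribes.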

Technical Readiness

System Integration Readiness

Part of readiness includes understanding what systems are required and how integrations will be achieved to support use cases. This aspect of readiness also includes real-time data availability, throughput, and fault tolerance if mission critical applications are part of the LLM solution.

Technical AI System Support

Support systems need to be in place to handle user and customer issues within established metrics and service-level agreements. The technical team also needs to understand the nuances of commercial versus open source LLMs: how they can be fine-tuned, how content is ingested into a vector space for retrieval, and how to enrich embeddings (the ingested content) with additional contextual clues provided by metadata. They also need to understand the costs associated with LLMs and how to control those costs.
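One common form of metadata enrichment is prepending contextual clues to the content before it is embedded, so those clues influence retrieval. The sketch below is illustrative: `embed` is a toy stand-in for a real embedding model, and the field names are hypothetical.

```python
# Sketch of enriching content with metadata before embedding.
# embed() is a toy stand-in for a real embedding model; the metadata
# fields and formatting convention are illustrative.

def embed(text):
    """Toy embedding: a 26-dim character-frequency vector (placeholder)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def enrich_and_embed(chunk_text, metadata):
    """Prepend metadata as contextual clues so it shapes the embedding."""
    clues = " | ".join(f"{k}: {v}" for k, v in sorted(metadata.items()))
    enriched = f"{clues}\n{chunk_text}"
    return enriched, embed(enriched)

enriched, vector = enrich_and_embed(
    "Hold the reset button for 10 seconds.",
    {"product": "X100 router", "doc_type": "troubleshooting"},
)
```

Because the product name and document type are embedded alongside the text, a query mentioning them retrieves this chunk more readily than the bare sentence would.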

Training and Change Management Plan

Deployments will be successful only if people understand them and trust the data and results. Achieving this goal requires intentional change management and socialization across direct and indirect stakeholders. It requires engagement with influencers and social capital in the organization. The business requires training programs and a support plan to ensure a successful rollout.

Are You Ready?

These are the elements for successful deployment of LLM-based RAG programs. RAG will have the biggest impact of any AI program according to Forrester[1] (and other analysts). The time to get your data and information architecture house in order is now. Organizations need to move quickly and boldly in this space or risk being left behind by those that do.

Notes:

[1] Forrester Tech Conference, Austin Texas, September 2023 and https://www.zdnet.com/article/generative-ai-is-forcing-companies-to-think-big-and-move-fast/
