
Generative AI: How businesses can sift through the chaff

Generative AI and LLMs have been the talk of the town for the past 15 months, ever since OpenAI launched its ChatGPT service in November 2022. Everyday users have started using LLMs to do homework, to write creatively (emails, articles and stories), and to create and edit images, audio and videos.


GenAI has crept into business usage for much the same reasons, and then some (e.g., to assist in writing, improving and testing software code, in generating insights from tables and spreadsheets, and in extracting insights from, or writing summaries of, large document repositories).


Company boards have swung into action and approved billions of dollars to be spent on proofs of concept (PoCs) for use cases proposed by their teams (and by well-paid external consultants). CXOs have set up 'Innovation teams' and 'GenAI COEs' to execute these PoCs and to take the feasible ones to production.


Some companies (including a client of mine) have been so caught up in the buzz that multiple BUs/functions are developing different variations of the same use case, independently!


Many companies are finding that, despite the initial buzz, many GenAI use cases do not live up to their promise when the 'shiny pilots' are deployed to production.



Source: OpenArt



Based on my experience, there seem to be several potential reasons for this:

  1. Lack of a clear business case: In the hurry to develop a shiny GenAI solution, the executive team often fails to articulate a detailed financial business case for it:

    1. How much employee time and cost will the tool save?

    2. If it adds customer functionality, what value will it drive through additional revenue or customer delight?

    3. What are the operating costs and the ongoing maintenance costs for the GenAI tool?

  2. Operating cost: When a wider user base (all employees, or all customers) is given access to the tool, usage of the LLM and the associated vector database grows sharply -- leading to higher-than-anticipated costs. Unlike the pilot development team, the wider user base is not obliged to use the shiny new tool in a limited, controlled way.

  3. User opposition/ rejection: The Generative AI tool faces opposition from the BU/ function employees if they perceive that it will take away their jobs. They may resist it by pointing out its lacunae rather than helping to improve it. This leads to underutilization and failure of the tool.

  4. Development/ maintenance costs: Some businesses have built custom LLMs on open-source models, anticipating the large volume of inferencing they will require. As researchers publish newer state-of-the-art (SOTA) architectures, the development teams are inclined to keep updating the underlying model. They are realizing that this requires a lot of effort, disrupting their original RoI estimate.

  5. Security and privacy issues: Businesses are wary of using GenAI cloud services that may expose their proprietary documents, data and software code to external LLM algorithms.

  6. Integration with existing systems: In some instances, the LLM tool is developed in isolation from the existing systems, or is a commercial off-the-shelf (COTS) solution. Its outputs may not be well integrated with the existing IT systems, so manual effort must be spent to incorporate them at scale. This, too, disrupts the RoI estimate.
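The 'clear business case' pitfall above is, at its core, simple arithmetic: annual savings minus run costs, compared against the one-time build cost. A minimal sketch of that calculation, using entirely hypothetical figures (substitute your own estimates):

```python
# Back-of-the-envelope business case for a GenAI tool.
# All figures below are hypothetical placeholders, not benchmarks.

def annual_net_benefit(hours_saved_per_user_per_month: float,
                       loaded_hourly_rate: float,
                       num_users: int,
                       annual_operating_cost: float,
                       annual_maintenance_cost: float) -> float:
    """Annual savings from employee time freed up, minus run costs."""
    annual_savings = hours_saved_per_user_per_month * 12 * loaded_hourly_rate * num_users
    return annual_savings - annual_operating_cost - annual_maintenance_cost


def payback_months(development_cost: float, net_benefit_per_year: float) -> float:
    """Months to recover the one-time build cost (inf if it never pays back)."""
    if net_benefit_per_year <= 0:
        return float("inf")
    return development_cost / (net_benefit_per_year / 12)


# Example: 500 users each save 2 hours/month at a $60/hour loaded rate,
# against $250k/year operating and $100k/year maintenance costs.
benefit = annual_net_benefit(2, 60, 500,
                             annual_operating_cost=250_000,
                             annual_maintenance_cost=100_000)
print(benefit)                          # 370000
print(payback_months(150_000, benefit)) # about 4.9 months
```

Even this crude model forces the questions in point 1 above to be answered with numbers rather than enthusiasm.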


To avoid these pitfalls, GenAI projects should go through the following explicit steps:

  1. Ideation across BUs and functions: This can be done by leveraging external consultants, internal brainstorming, or a hackathon (or a combination of these)

  2. Shortlisting of viable and high RoI ideas (based on high-level RoI estimation) for piloting (or PoCs) -- with the involvement of the executive team at some level

  3. Development of low-cost, low-risk pilots/ PoCs -- involving a joint team of developers from the Innovation team/ AI COE and business managers from the BU/ function where the tool will get deployed (Rule of thumb: GenAI PoCs should not cost more than $20k each to build)

  4. Development of detailed business cases for the productionized GenAI tool -- in more detail than the high-level RoI estimation used to select the pilots:

    1. What are the benefits of the tool:

      1. Cost savings due to employee effort reduction (adjusted for manual effort needed to learn the tool, integrate it with existing systems and use it in our daily business process)

      2. Customer value: How does this tool move the needle on customer experience? How will it increase customer delight and reduce customer attrition? What is the breakeven increase in revenue needed to justify the costs of the GenAI tool?

    2. What are the costs of the tool:

      1. Development costs, including the costs of external consultants and internal employees to convert the GenAI pilot into a tool, and the costs of developing embeddings for the corpus of documents/ images/ audio/ video involved

      2. Operating costs, including the additional cloud infrastructure costs (GPUs for inferencing, vector database storage, etc.) and the cost of tokens on cloud-based LLM models (if applicable). Anticipate misuse scenarios and how to prevent them -- what works well is to set individual user limits on token usage on cloud-based LLMs using API sub-keys (and to keep revisiting those limits if users are hitting them).

      3. Maintenance (MLOps) costs: If the GenAI tool involves custom models, one must calculate the cost of re-training the models (engineer costs, GPU costs, etc.)

      4. Employee training costs

  5. Approval of viable PoCs for production -- with explicit CXO involvement.

  6. Change Management: At this stage it is useful to discuss with the employees of the BU/ function where the GenAI tool will be deployed, to address their apprehensions and doubts.

  7. Build the GenAI tool within the business's own IT infrastructure using open-source LLMs; or use trusted cloud LLM services from reputable companies that have publicly committed not to use user data for training models -- to overcome security/ privacy concerns.
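The per-user token caps recommended in step 4 above can be enforced in the application layer. A minimal sketch, assuming each user is issued an API sub-key and usage is tracked in-app (names and limits are hypothetical; a real deployment would persist usage in a database and reset it each billing cycle):

```python
# Per-user token budgeting for a cloud LLM, enforced before each API call.
# Hypothetical illustration: sub-key strings and the 100k limit are placeholders.

class TokenBudget:
    def __init__(self, monthly_limit: int):
        self.monthly_limit = monthly_limit
        self.used: dict[str, int] = {}  # sub-key -> tokens consumed this cycle

    def try_consume(self, sub_key: str, tokens: int) -> bool:
        """Record usage if within budget; refuse otherwise.

        The caller should surface a 'limit reached' message on refusal,
        and finance can review which limits users keep hitting.
        """
        current = self.used.get(sub_key, 0)
        if current + tokens > self.monthly_limit:
            return False
        self.used[sub_key] = current + tokens
        return True

    def reset_cycle(self) -> None:
        """Clear all counters at the start of a new billing cycle."""
        self.used.clear()


budget = TokenBudget(monthly_limit=100_000)
print(budget.try_consume("user-42", 80_000))  # True
print(budget.try_consume("user-42", 30_000))  # False: would exceed 100k
```

The point of the design is that the refusal happens before the cloud call, so a runaway user caps their own cost rather than the company's bill.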



If you want to discuss what goes into the business case for typical use cases (e.g., a knowledge management Q&A tool, document summarization, marketing personalization, a content generator, etc.), please feel free to reach out to us.

