Debunking 6 Generative AI Myths Including “Bigger is Better”



By now it has probably been said a gazillion times that generative AI has taken our world by storm. What hasn’t been said that often is that this storm has found almost every business software vendor flat-footed and utterly unprepared. Mind you, the demand for more efficiency and effectiveness was there, but it couldn’t be fulfilled meaningfully. Consequently, all vendors scrambled to announce and deliver some type of generative AI functionality — meaningful or not.

The good news is that this has changed. In the meantime, vendors are delivering some valuable generative AI business use cases that go beyond writing emails or low- to mid-end marketing content that can then be blasted out even faster than before.

The bad news is that businesses still lack a real understanding of what it takes to implement and use AI-based systems, let alone how to leverage their capabilities while acknowledging their limitations. And vendors aren't helping them understand this. Neither are most of the systems integrators (SIs).

To address this to some extent, I recently published an article that aims to set businesses up for success by having them undergo a realistic assessment of how AI-ready they are, instead of getting caught up in the promises of some myths. Generative AI, and AI in general, needs to be addressed strategically.

So, let’s bust some of these myths.

1. Bigger is better

Conventional wisdom (and vendor marketing) leads us to assume that bigger LLMs are better than smaller ones. However, a growing body of research points to another conclusion: the law of diminishing returns applies. Large models need more computational resources, which slows them down if those resources are not available; typically, larger models are slower than smaller ones. And while model size determines accuracy to some extent, it is regularly possible to prune a model without losing much (or any) accuracy. Instead, prompts, training and fine-tuning, and the use cases themselves matter a lot.

Recommendation: Don’t fall for the “bigger is better” narrative; have vendors show results for your use cases before you commit.
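“Show results for your use cases” can be made concrete with a tiny evaluation harness. The sketch below is a minimal, hypothetical illustration: the two “models” are stand-in callables (any function mapping a prompt to an answer would do), and the test cases are toy examples. The point it makes is the point of the myth: on your own task suite, a smaller model may score exactly as well as a bigger one.

```python
# Minimal sketch of a use-case evaluation harness: score every candidate
# model on YOUR tasks, not on generic benchmarks. The "models" below are
# hypothetical stand-ins, not real vendor APIs.

def evaluate(model, test_cases):
    """Return the fraction of test cases the model answers acceptably."""
    hits = 0
    for prompt, acceptable_answers in test_cases:
        answer = model(prompt).strip().lower()
        if answer in acceptable_answers:
            hits += 1
    return hits / len(test_cases)

# Hypothetical stand-ins for a "large" and a "small" model.
def big_model(prompt):
    p = prompt.lower()
    return "paris" if "capital" in p and "france" in p else "unknown"

def small_model(prompt):
    return "paris" if "france" in prompt.lower() else "unknown"

test_cases = [
    ("What is the capital of France?", {"paris"}),
    ("Name the capital city of France.", {"paris"}),
]

print(evaluate(big_model, test_cases))    # 1.0
print(evaluate(small_model, test_cases))  # 1.0 -- the small model ties
```

With a real harness you would plug in the vendors' actual models and your actual prompts; the comparison logic stays the same.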

2. The more data the better

Data is the foundation of well-working AI. And, admittedly, an AI needs a lot of data to deliver reasonable results. This is one of the core reasons why predictive systems regularly require a minimum number of records in the database before they become active at all. Yet, the amount of data is only one piece of the puzzle, and actually a small one. No amount of data is helpful if it is not governed properly before it is used to run an AI. Data needs to be “clean”, consistent (i.e., free of contradictions and duplicate entries), and it needs to reasonably describe the situations that can arise within the solution scope of the AI system. A reasonable description also means that the data’s distribution statistically matches reality. This is also the core reason for the requirement to have enough data.

Or else!

Recommendation: Before embarking on an AI project, get your data in order and make sure that data governance, including processes with owners who keep the data clean, is in place. If you cannot run a data project before the AI project, make sure that the data project is an early part of the AI project.
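The checks such a data project starts with are simple to state in code. The sketch below is an illustrative assumption, not a real tool: it scans a list of records for the three problems named above (duplicates, incomplete entries, and contradictions, i.e., the same key with different data). The field names are made up.

```python
# A minimal sketch of basic data-quality checks: duplicates, missing
# values, and contradictory entries. Field names are illustrative.

def quality_report(records, key="email"):
    """Count duplicate keys, incomplete records, and contradictions."""
    seen = {}
    duplicates = incomplete = contradictions = 0
    for rec in records:
        if any(v in (None, "") for v in rec.values()):
            incomplete += 1
        k = rec.get(key)
        if k in seen:
            duplicates += 1
            if seen[k] != rec:       # same key, different data
                contradictions += 1
        else:
            seen[k] = rec
    return {"duplicates": duplicates,
            "incomplete": incomplete,
            "contradictions": contradictions}

records = [
    {"email": "a@example.com", "country": "DE"},
    {"email": "a@example.com", "country": "FR"},  # duplicate AND contradictory
    {"email": "b@example.com", "country": ""},    # incomplete
]
print(quality_report(records))
# {'duplicates': 1, 'incomplete': 1, 'contradictions': 1}
```

Real data-governance tooling goes far beyond this, but a report like this one already tells you whether your data is anywhere near AI-ready.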

3. All you need is a Large Language Model (LLM)

OpenAI created a lot of buzz.

And I mean, A. LOT.

This buzz makes us forget that there was AI before it became generative. While generative AI can be used for a lot of things, it is not always, or rather often not, the best tool to throw at a problem. Many analytical and other problems can be solved with narrow, purpose-oriented, domain-specific models, or with small or medium language models. These models are regularly faster, more accurate, better understood, and more efficient than a large language model.

Recommendation: Before you replace your existing AI systems with a new generative AI, consider the move from various angles, including cost efficiency. You might not need the power of an LLM, and your use cases might not even play to an LLM’s strengths.
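To make the “narrow, purpose-oriented model” point tangible: a task like routing support tickets can often be handled by something as small as a keyword classifier, which is instant, transparent, and free to run. The categories and keywords below are illustrative assumptions, not a recommended taxonomy.

```python
# A deliberately tiny "narrow model": keyword-based ticket routing.
# No LLM involved; every decision is fully explainable.

ROUTES = {
    "billing":   {"invoice", "payment", "refund", "charge"},
    "technical": {"error", "crash", "login", "bug"},
}

def route_ticket(text):
    words = set(text.lower().split())
    scores = {cat: len(words & keywords) for cat, keywords in ROUTES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"

print(route_ticket("I was billed twice, please refund the payment"))  # billing
print(route_ticket("The app shows an error on login"))                # technical
print(route_ticket("How do I change my address?"))                    # general
```

An LLM could of course do this too, but at higher cost and latency, and with less predictable behavior; that trade-off is exactly what the recommendation asks you to weigh.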

4. Generative AI improves people’s performance

The big promise of generative AI is that providing advanced tools helps people become more productive or, more generally, increases their performance on the job. While this is often true, a lot depends on the use case and on the organization’s confidence in the technology and its resistance to change. People might not understand well enough how to use the tool efficiently. Employees’ expectations matter, too: if they are too high and/or the quality of the AI’s output is not high enough, the result is frustration and rejection. This can even lead to an increased, instead of decreased, workload, caused by exception handling and the need to supervise the AI.

Recommendation: Deploy generative AI with comprehensive training programs, ensuring employees understand its capabilities and integration into workflows. This empowers them to leverage AI effectively, enhancing productivity while minimizing disruptions and maximizing the tool’s benefits.

5. Generative AI is “fire and forget”

The ubiquity of tools as diverse as ChatGPT, Opus Clip, and the various copilots on offer suggests that it is possible to just use them without any further maintenance. This is not the case. The models need to be kept up to date with new data to stay relevant; there is a continuous need for governance, and not only for ethical or legal reasons; their performance needs to be monitored; new use cases need to be supported; and new systems and workflows get established in the business. These and many more topics need continuous attention.

Recommendation: Develop an AI strategy and an AI governance strategy, including the relevant policies; set up a framework for monitoring the AI’s performance; and train your people to use the system effectively.
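One concrete shape such a monitoring framework can take: track how often users accept the AI’s output and raise a flag when the rolling acceptance rate drops. The sketch below is a minimal illustration; the window size, threshold, and minimum sample count are illustrative assumptions, not recommendations.

```python
# Minimal sketch of ongoing AI performance monitoring: a rolling
# acceptance rate with a review flag when quality degrades.
from collections import deque

class AcceptanceMonitor:
    def __init__(self, window=100, threshold=0.7):
        self.outcomes = deque(maxlen=window)  # True = output accepted
        self.threshold = threshold

    def record(self, accepted):
        self.outcomes.append(bool(accepted))

    def rate(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_review(self):
        # Only alarm once enough observations are in the window.
        return len(self.outcomes) >= 20 and self.rate() < self.threshold

monitor = AcceptanceMonitor(window=50, threshold=0.8)
for accepted in [True] * 30 + [False] * 15:   # quality degrades over time
    monitor.record(accepted)
print(round(monitor.rate(), 2))   # 0.67
print(monitor.needs_review())     # True
```

In practice the signal might come from thumbs-up/down buttons, edit distance between draft and final text, or escalation rates; the governance principle, that someone watches the trend and acts on it, is the same.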

6. Generative AI gives a lasting competitive advantage

Striving for increased efficiency, businesses often implement generative AI to speed up processes and to “do more with less”. While they may well achieve these objectives, this by no means translates into a lasting competitive advantage, if any at all. For starters, the technology advances too fast, and the barrier to entry is as low for the competition as it is for your own business.

Translating technology into a lasting advantage takes more than merely deploying it and letting it loose on the same old process. The adage “a fool with a tool remains a fool” applies. Developing a competitive edge means adapting processes to leverage increased automation, ensuring high data quality, and integrating AI into business systems effectively and efficiently. Plus, as skilled people are scarce and mobile, it is very important to establish and maintain a corporate culture that attracts them.

Recommendation: Don’t just deploy a generative AI system; make sure that it is embedded in an efficient process landscape and supported by a culture that does not instill fear in employees but supports working with this technology. Competitive advantage results from culture, a superior strategy, and execution. Technology is merely part of the execution.

Overcoming these and other myths is important for the successful rollout of meaningful AI capabilities in your business. This is best done by undergoing an AI readiness assessment as described in my article on how to assess your AI readiness with 50 questions.

Thomas Wieberneit

Thomas helps organisations of different industries and sizes to unlock their potential through digital transformation initiatives, using a Think Big - Act Small approach. He is a long-standing CRM practitioner, covering sales, marketing, service, collaboration, customer engagement, and customer experience. Coming from the technology side, Thomas has the ability to translate business needs into technology solutions that add value. In his successful leadership positions and consulting engagements, he has initiated, designed, and implemented transformational change and delivered mission-critical systems.

