Are SLMs AI's Next Game Changer?
While the AI industry remains heavily focused on Large Language Models (LLMs), a growing contingent of experts believes Small Language Models (SLMs) could soon take center stage.
As tech companies ramp up their generative AI investments heading into the busy holiday season, the conversation is shifting away from monolithic systems toward tailored solutions.
While LLMs excel at handling diverse tasks, they are often resource-intensive, posing challenges in cost, energy consumption, and computational demands.
According to AWS CEO Matt Garman, there is already significant interest from businesses leveraging generative AI powered by models from providers such as Anthropic.
Yet, LLMs, while dominant in some areas, are not a universal fit.
Teradata CEO Steven McMillan champions SLMs as the future of AI, emphasizing their ability to deliver task-specific outputs.
As McMillan put it:
“As we look to the future, we think that small and medium language models and controlled environments such as domain-specific LLMs, will provide much better solutions.”
SLMs are purpose-built, trained on specific datasets to provide highly relevant insights while ensuring data privacy.
Unlike their larger counterparts, they optimise energy and computing resources to scale according to a project's needs, offering a cost-effective and efficient alternative.
For businesses requiring deep expertise in a particular domain, specialised models—whether domain-specific LLMs or SLMs—provide unparalleled accuracy. For instance, a model tailored for a Chief Marketing Officer (CMO) versus one for a Chief Financial Officer (CFO) delivers highly contextual and relevant insights within its field of training.
As the AI landscape evolves, the rise of SLMs highlights a growing demand for precision, efficiency, and scalability—qualities that could redefine the future of intelligent systems.
Are SLMs a Better Option?
The Association of Data Scientists (ADaSci) estimates that developing an SLM with 7 billion parameters to serve one million users would require only 55.1 MWh (megawatt hours) of electricity.
In contrast, training GPT-3, with its 175 billion parameters, consumed a staggering 1,287 MWh, not accounting for the energy used when the model is actively deployed.
By that estimate, the SLM scenario uses only around 4% of the energy required by the LLM.
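As a rough back-of-the-envelope check, the ratio follows directly from the two figures cited above; the short Python sketch below simply divides the ADaSci estimate for the 7-billion-parameter SLM scenario by the reported GPT-3 training figure (variable names are illustrative, not from any source):

```python
# Back-of-the-envelope comparison using the figures cited above:
# ADaSci estimate for a 7B-parameter SLM serving one million users,
# versus the reported energy to train GPT-3 (175B parameters).
slm_energy_mwh = 55.1       # estimated SLM scenario, in MWh
gpt3_training_mwh = 1287    # reported GPT-3 training energy, in MWh

ratio = slm_energy_mwh / gpt3_training_mwh
print(f"SLM scenario uses ~{ratio:.1%} of GPT-3's training energy")
# -> SLM scenario uses ~4.3% of GPT-3's training energy
```

Note that the two numbers describe different phases (serving users versus one-off training), so the comparison is indicative rather than exact.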
LLMs, due to their massive computational demands, are typically run on cloud servers, which often means companies lose control over their data and face slower response times as information travels over the internet.
As businesses increasingly adopt AI, it is clear that one size will not fit all.
Companies will prioritise efficiency, cost-effectiveness, and minimising data transfer to the cloud by choosing the most suitable model for each project—whether that's a general-purpose LLM or a smaller, more specialised model.
In this evolving landscape, efficiency and accuracy will be paramount.
McMillan said:
“When you think of training AI models, they must be built on the foundation of great data.”
He added:
“That is what we are all about, providing that trusted data set and then providing the capabilities and analytics capabilities so clients, and their customers, can trust the outputs.”
Smaller, domain-specific LLMs offer an attractive alternative, enabling businesses to harness the power of AI without the heavy resource costs and complexity of larger models.
As AI becomes central to business decision-making, selecting the right model for each task will be critical, ensuring both reliable and scalable results.