
Revolutionizing AI with Cost-Efficient Micro Language Models
Major IT companies are leading a shift towards micro and small language models, offering a cost-effective and efficient alternative to large AI models. Tailored for specific industries such as banking and cybersecurity, these models provide faster responses and lower operational costs by using domain-specific data, marking a significant step in modernizing legacy systems and accelerating AI adoption.
In a significant move within the IT sector, major firms such as Infosys, HCLTech, and Tech Mahindra are spearheading the development of micro and small language models (MLMs and SLMs). These specialized models are engineered to deliver faster response times and lower operational costs, proving especially effective for applications with low-to-medium complexity.
Tailoring AI for Specific Sectors
These new models are optimized using proprietary client data combined with standard industry information. This targeted approach allows companies to build solutions for specific functions in sectors like banking, cybersecurity, and telecommunications. For instance, Salil Parekh, CEO of Infosys, highlighted that the company has developed four distinct language models dedicated to banking, IT operations, cybersecurity, and broader enterprise needs. By leveraging such customized data, these models can match or exceed the output quality of larger language models while demanding far fewer resources.
Cost-Performance Advantages
HCLTech’s CEO, C. Vijayakumar, emphasized cost performance as a key differentiator in the current AI landscape. Deploying small, highly specific models significantly reduces overall expenditure—a factor expected to accelerate adoption, particularly in legacy modernization programs. Similarly, Tech Mahindra has shifted its focus from large language models (LLMs) to developing SLMs and tiny language models, which better address niche problems without heavy compute demands or a large carbon footprint.
Addressing Sector-Specific Challenges
Within regulated and compliance-heavy industries, such as banking and telecommunications, SLMs offer a promising solution. While traditional LLMs process massive volumes of publicly available data—sometimes resulting in inherent biases—SLMs are trained on refined, domain-specific data. This precision not only enhances accuracy but also makes these models an attractive proposition for IT service providers looking to modernize digital platforms amid rising hardware costs.
As Abhigyan Malik from the Everest Group noted, increasing hardware prices and the demand for tailored solutions have driven IT firms to invest in SLMs. With costs for using even conversational AI models having dropped by more than 85% since early 2023, enterprises are now better positioned to implement these models across various operations, thereby advancing AI-led growth.
Building and Customizing SLMs
Infosys has been at the forefront, constructing these models with a mix of client-specific and generic industry data. Clients have shown increasing interest in deploying their own small language models, and Infosys has capitalized on this trend by offering a robust platform for customization. Analysts observe that the development of these leaner models—whether by scaling down existing LLMs or leveraging open-source technologies—is set to become a defining trend alongside advancements in GPUs and AI PCs.
By optimizing cost and specificity, the rising adoption of micro and small language models is not only reshaping the AI landscape but also creating a win-win scenario for both IT service providers and their clients.
Note: This publication was rewritten using AI. The content was based on the original source linked above.