
Fractal Analytics Ushers in India's AI Future with Landmark Reasoning Model
Fractal Analytics, a pioneering AI firm headquartered in Mumbai, is setting the stage for a major leap in artificial intelligence within India. In an ambitious proposal to the IndiaAI Mission, the company has outlined plans for creating the nation’s first large reasoning model (LRM), a move that underlines India's intent to be at the forefront of advanced AI reasoning.
Proposal Overview
The initiative involves building a series of models designed to tackle complex problem-solving and decision-making tasks. Fractal’s project aims to develop three versions:
- Small model: 2 to 7 billion parameters.
- Medium model: 20 to 32 billion parameters.
- Large model: A state-of-the-art model with 70 billion parameters trained on up to 1 trillion tokens.
The overall project is budgeted at Rs 118.8 crore, of which Rs 76.6 crore is being sought as external funding from the government.
Funding and Technical Scope
In a significant push towards post-training reasoning, Fractal's founder Srikanth Velamakanni emphasized the need to move past conventional pre-trained models. Velamakanni stated that India must concentrate on models that can not only process data but also "think and reason" to solve real-world problems. This strategic shift is intended to keep India competitive against global AI giants like those in the US and China.
The proposed LRMs, a specialized branch of large language models (LLMs), are engineered to excel in reasoning tasks that require intricate planning and problem-solving capabilities. The models are envisioned to augment pre-trained systems, enabling them to address complex challenges more effectively.
Advancing Reasoning Capabilities through Innovation
Fractal Analytics plans to build these reasoning capabilities using open-source LLMs with permissive licenses, integrating them with locally developed Indian language models. This effort is not just about scale—it’s about improving reasoning efficiency at inference time, a cost-effective strategy for overcoming the performance plateaus seen when scaling training-time compute.
Velamakanni boldly remarked, "The era of pre-training is over. The race for better AI is now focused on building systems that can work with pre-trained models and accomplish complex real-world tasks." His vision signals a paradigm shift that could accelerate India’s journey towards artificial general intelligence (AGI).
Competitive Landscape and Future Prospects
Fractal’s planned models would enter a crowded global field of reasoning systems. For context, while OpenAI's o1 and o3 models have gained attention, the largest reasoning model today is DeepSeek R1, at 671 billion parameters. Fractal’s LRM series has the potential to position India as a serious contender in advanced AI reasoning.
In tandem with model development, Fractal Analytics is also preparing to release a diverse dataset sourced from premier examinations across India—including JEE Advanced, NEET-PG, the National Olympiads, CAT, and GATE. This dataset will be instrumental in honing the models' proficiency in STEM, coding, and medical reasoning, as well as in agentic tasks.
A Strategic Leap Towards AGI
With 187 applications received across two bidding rounds for building sovereign AI foundation models, the competition is intense. The Ministry of Electronics and Information Technology (MeitY) has indicated plans to approve several applications by the end of the month. Against this backdrop, Fractal's bid not only strengthens India's position in AI but also signals a new era of reasoning-enabled technologies.
This project, by marrying advanced reasoning with large-scale data and cutting-edge compute strategies, sets the stage for what could be a transformative leap towards AGI in India.