
Intelligence shouldn't be heavy.

We are building the physics of efficient intelligence. Our goal is to decouple AI capability from computational mass.

The Mission

For the last decade, the AI industry has been obsessed with scale. The prevailing logic was simple: more parameters, more data, more GPUs. While this produced impressive capabilities, it also concentrated power in the hands of a few.

At Maruth Labs, we take the inverse approach. We ask: "What is the minimum architectural complexity required to achieve reasoning?"

We build Small Language Models (SLMs) that punch above their weight class. By optimizing the fundamental mathematics of attention and positional embeddings (Project Perseus), we bring server-grade intelligence to edge devices.

The Name

"Maruth" is derived from the ancient concept of the wind—fast, invisible, and ubiquitous.

This reflects our engineering philosophy. Good infrastructure should be like the wind: you feel its impact everywhere, but you never see the machinery. Our models are designed to be light and fast, moving seamlessly between the cloud and the device in your pocket.

The Lab

Headquartered in Delhi, India, we are a multidisciplinary team operating at the intersection of linguistic theory, hardware optimization, and high-performance computing.

We believe research moves faster when it's shared. We publish our failures, benchmarks, and updates in real time to accelerate the collective understanding of efficient intelligence.

Delhi · Research HQ
X / Twitter · Follow Updates
LinkedIn · Company Profile
Proprietary · Architecture

Join the Search

We are a small, high-density team. We don't have product managers or endless meetings. We just have builders.

If you are obsessed with challenging the status quo, we want to hear from you.

Send us your work