Safety & Ethics

At Maruth Labs, we believe the safest AI is the one that respects your data sovereignty. Our safety philosophy is built on an architecture of privacy and control.

Principle 01

Privacy by Architecture

Traditional AI safety often relies on server-side filters. Our approach is more fundamental: data should not leave the device unless you explicitly authorize it.

By prioritizing edge deployment (running Madhuram locally on your phone or server), we eliminate the most common AI safety risk: data leakage. When you use Madhuram in local mode, your prompts, context, and outputs are processed entirely on your hardware. Maruth Labs has zero visibility into your data.
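To make that guarantee concrete, here is a minimal sketch of what local mode can look like, assuming Madhuram ships as a standard Hugging Face checkpoint; the model ID maruth-labs/madhuram below is hypothetical. With local_files_only=True, the load makes no network request, so prompts and outputs never leave the machine.

```python
# Minimal local-inference sketch. Assumes the checkpoint has already been
# downloaded (or copied) to the local Hugging Face cache or a local directory;
# "maruth-labs/madhuram" is a hypothetical model ID used for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "maruth-labs/madhuram"  # hypothetical; a local directory path also works

# local_files_only=True forbids any hub lookup at load time.
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, local_files_only=True)

prompt = "Summarize the following note in one sentence:\nMeeting moved to Friday."
inputs = tokenizer(prompt, return_tensors="pt")

# Generation runs entirely on local hardware; nothing is sent to a server.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```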

Principle 02

Transparency & Hallucination

Small language models (SLMs) like Madhuram are efficient, but like all language models, they can hallucinate facts. We are committed to transparency regarding model capabilities and limitations.

We recommend Madhuram for specific, well-scoped tasks rather than as a general-purpose model in production-grade pipelines. Users should always verify factual claims generated by the model against trusted sources.

Principle 03

Dual-Use Mitigation

We actively filter our training datasets (including FineWeb-Edu and Cosmopedia) to remove harmful, toxic, or dangerous content before the model ever sees it.
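To illustrate the idea, here is a minimal sketch of a streaming filter pass over the public FineWeb-Edu corpus using an off-the-shelf toxicity classifier. The classifier (unitary/toxic-bert) and the 0.5 threshold are illustrative assumptions, not our production pipeline.

```python
# Illustrative data-filtering sketch: stream FineWeb-Edu and drop documents an
# off-the-shelf classifier flags as toxic. The classifier choice and threshold
# are assumptions for demonstration, not Maruth Labs' actual pipeline.
from datasets import load_dataset
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def is_clean(example):
    # Score the first ~512 characters; keep the document unless the top label
    # is "toxic" with confidence at or above the (assumed) 0.5 threshold.
    result = toxicity(example["text"][:512], truncation=True)[0]
    return not (result["label"] == "toxic" and result["score"] >= 0.5)

stream = load_dataset("HuggingFaceFW/fineweb-edu", split="train", streaming=True)
clean_stream = stream.filter(is_clean)

for doc in clean_stream.take(3):
    print(doc["text"][:80])
```

A single classifier pass is only one layer of filtering; the threshold trades recall of harmful content against loss of benign documents, which is why filtering is applied before training rather than relied on at inference time.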

Furthermore, Madhuram is a strictly closed-source model. By retaining exclusive control over the model weights, we prevent malicious actors from stripping away safety guardrails or fine-tuning the model for harmful purposes, such as generating disinformation, malware, or non-consensual content.

Report a Vulnerability

If you discover a safety vulnerability or a way to bypass our safety training, please report it directly to our engineering team. We prioritize safety patches over feature releases.

[email protected]

Questions about Safety?

We're committed to responsible AI development. Reach out to discuss our safety practices.

Contact Safety Team