Building Indic Tokenizer: A Cost-Effective Approach

July 31, 2025
10 min read

The Challenge of Multilingual Tokenization

Creating effective tokenizers for multiple languages, especially languages written in different scripts such as English and the Indic languages, presents unique challenges. This article documents the journey of building our "Indic Tokenizer", a multilingual tokenizer supporting English, Hindi, Bengali, Kannada, Tamil, Punjabi, and Telugu, from expensive failures to an efficient, cost-effective solution.

Multilingual tokenization is particularly challenging because different language families have vastly different characteristics. While English uses a straightforward alphabetic script, Indic languages employ complex scripts with conjuncts, combining marks, and context-dependent character variations. These differences mean that tokenization strategies effective for one language family often fail catastrophically for others.

Failure #1: ByteLevel Tokenization Falls Short

The first approach seemed logical: use ByteLevel tokenization, which works well for English by treating text as raw bytes. However, this approach proved disastrous for Indic languages.

The Problem: Indic scripts have complex character compositions with combining marks, conjuncts, and multi-byte UTF-8 representations. ByteLevel tokenization fragments these meaningful linguistic units into meaningless byte sequences, destroying the semantic structure that's crucial for these languages. For example, a single Devanagari conjunct character can be split into multiple byte-level tokens, making it impossible for models to learn proper word boundaries or morphological patterns.
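To make this concrete, the short Python illustration below (the example word is chosen for this post, not taken from the training data) shows how a single Devanagari conjunct expands into nine raw bytes, which is the level at which a ByteLevel tokenizer operates:

```python
# Illustration only: how a byte-level view fragments a Devanagari conjunct.
conjunct = "क्ष"  # "kssa": KA (U+0915) + VIRAMA (U+094D) + SSA (U+0937)

code_points = [hex(ord(ch)) for ch in conjunct]
raw_bytes = list(conjunct.encode("utf-8"))

print(code_points)     # ['0x915', '0x94d', '0x937'] -> 3 code points
print(len(raw_bytes))  # 9 -> a ByteLevel tokenizer starts from 9 byte symbols
```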

This failure highlights a fundamental issue in cross-linguistic NLP: what works for morphologically simple languages like English often breaks down when applied to morphologically rich languages. The byte-level approach, while elegant for ASCII-based languages, ignores the linguistic reality of complex writing systems.

Result: Poor tokenization quality with high fertility rates (too many tokens per word) and loss of linguistic meaning.

Failure #2: The Data Deluge Disaster

Learning from the ByteLevel failure, the second attempt took a different approach: throw more data at the problem. This reflects a common misconception in modern NLP that more data automatically leads to better results.

The Approach: scale up aggressively, collecting roughly 170 GB of raw multilingual text and training on a 64-vCPU machine (see Table 3).

The Problems: the run took 11 hours, consumed a significant compute budget, and the noisy, unfiltered data still produced poor tokenization quality.

This failure demonstrates the "data fallacy" - the assumption that poor model performance can always be solved by adding more training data. In reality, noisy, unfiltered data often hurts performance more than it helps, especially in multilingual settings where data quality varies dramatically across languages.

Key Insight: More data doesn't automatically solve tokenization quality issues. The problem wasn't quantity - it was approach and data quality.

Success #1: Small-Scale Breakthrough

After two expensive failures, the third attempt took a minimalist approach, inspired by research showing that careful data curation outperforms brute-force scaling.

The Pivot: start small - two languages, a strictly filtered dataset, and a single 4-vCPU machine (see Table 3).

Result: a usable tokenizer trained in about 5 minutes that learned meaningful subword patterns instead of memorizing noise.

This success validates the "less is more" principle in machine learning. By focusing on two languages and applying strict quality filters, the tokenizer could learn meaningful patterns rather than memorizing noise. The dramatic reduction in hardware also demonstrates that effective NLP doesn't require massive computational resources when approached thoughtfully.

The Revelation: Quality data curation and appropriate vocabulary sizing matter more than brute-force scaling.

Success #2: The Indic Tokenizer

Building on the small-scale success, the final iteration expanded thoughtfully, applying lessons from multilingual tokenization research.

Technical Specifications: the final tokenizer covers seven languages (English plus six Indic languages), uses BPE with byte fallback, and was trained on roughly 950 MB of curated data in 22 minutes on 4 vCPUs (see Table 3).

Data Engineering Strategy

The key was intelligent data curation rather than volume. This approach draws from research showing that data quality is more important than quantity for multilingual models. The filtering process removed low-quality text, ensuring each training sample contributed meaningful linguistic information. The system implemented strict validation criteria: minimum text length requirements, character validity checks based on Unicode ranges, and emoji/unwanted character removal. This aggressive filtering reduced data volume but dramatically improved quality, following principles established in recent multilingual NLP research.
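A filter in this spirit might look like the sketch below; the thresholds, the exact Unicode ranges, and the function names are illustrative assumptions rather than the production settings:

```python
import unicodedata

# Sketch of the kind of quality filter described above (illustrative values).
# Unicode blocks for the supported scripts, plus Basic Latin for English.
ALLOWED_RANGES = [
    (0x0020, 0x007E),  # Basic Latin (English letters, digits, punctuation)
    (0x0900, 0x097F),  # Devanagari (Hindi)
    (0x0980, 0x09FF),  # Bengali
    (0x0A00, 0x0A7F),  # Gurmukhi (Punjabi)
    (0x0B80, 0x0BFF),  # Tamil
    (0x0C00, 0x0C7F),  # Telugu
    (0x0C80, 0x0CFF),  # Kannada
]

MIN_CHARS = 50          # minimum text length (assumed threshold)
MIN_VALID_RATIO = 0.95  # share of characters that must be in allowed ranges

def is_allowed(ch: str) -> bool:
    """True if the character belongs to a supported script or is whitespace."""
    return ch.isspace() or any(lo <= ord(ch) <= hi for lo, hi in ALLOWED_RANGES)

def keep_sample(text: str) -> bool:
    """Return True if a training sample passes the quality filter."""
    text = unicodedata.normalize("NFC", text).strip()
    if len(text) < MIN_CHARS:
        return False
    # Emoji and other unwanted symbols fall outside the allowed ranges,
    # so texts dominated by them are rejected here.
    valid = sum(is_allowed(ch) for ch in text)
    return valid / len(text) >= MIN_VALID_RATIO
```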

Language Balancing

Instead of equal representation, languages were rebalanced based on tokenization complexity. This reflects linguistic reality: some languages require more tokens per word due to their morphological complexity. Bengali, for example, received a much higher multiplier because of its complex script and extensive use of conjuncts, while English received an almost negligible one.

This balancing strategy addresses a critical issue in multilingual tokenization: naive equal sampling often underrepresents morphologically complex languages, leading to poor performance on exactly the languages that need the most attention.
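One straightforward way to implement this rebalancing is to resample each language's corpus by a complexity-based multiplier before training, as in the sketch below; the multiplier values and function names are placeholders for illustration, not the settings used in the final tokenizer:

```python
import random

# Illustrative rebalancing by per-language multipliers. The values below are
# placeholders; the article only states that Bengali received a much higher
# multiplier than English.
MULTIPLIERS = {
    "english": 0.3,
    "hindi": 1.0,
    "punjabi": 1.2,
    "tamil": 1.5,
    "telugu": 1.5,
    "kannada": 1.5,
    "bengali": 2.0,
}

def rebalance(corpora: dict[str, list[str]], seed: int = 0) -> list[str]:
    """Mix per-language corpora, scaling each by its multiplier."""
    rng = random.Random(seed)
    mixed: list[str] = []
    for lang, lines in corpora.items():
        weight = MULTIPLIERS.get(lang, 1.0)
        n = int(len(lines) * weight)
        if weight > 1.0:
            mixed.extend(rng.choices(lines, k=n))  # upsample with replacement
        else:
            mixed.extend(rng.sample(lines, n))     # downsample without replacement
    rng.shuffle(mixed)
    return mixed
```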

Tokenizer Architecture

The tokenizer used BPE (Byte-Pair Encoding) with byte fallback for robustness. This combination provides the linguistic awareness of BPE while retaining the ability to encode any Unicode character through byte-level fallback. The preprocessing pipeline included punctuation isolation, pattern-based splitting, digit handling, and Metaspace processing. The comprehensive initial alphabet included the individual characters of each supported script.
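A training setup in this spirit can be assembled with the Hugging Face `tokenizers` library. The sketch below mirrors the components named above (BPE with byte fallback, punctuation isolation, digit splitting, Metaspace, a character-level initial alphabet), but the vocabulary size, special tokens, and file path are assumptions rather than the exact production configuration:

```python
from tokenizers import Tokenizer, decoders, models, pre_tokenizers, trainers

# BPE model with byte fallback: unseen characters decompose into <0xNN> tokens.
tokenizer = Tokenizer(models.BPE(byte_fallback=True))

# Preprocessing: isolate punctuation, split digits individually, then apply
# Metaspace so word boundaries survive detokenization.
tokenizer.pre_tokenizer = pre_tokenizers.Sequence([
    pre_tokenizers.Punctuation(behavior="isolated"),
    pre_tokenizers.Digits(individual_digits=True),
    pre_tokenizers.Metaspace(),
])
tokenizer.decoder = decoders.Metaspace()

# Seed the initial alphabet with individual characters from the Indic scripts.
INDIC_BLOCKS = [(0x0900, 0x097F), (0x0980, 0x09FF), (0x0A00, 0x0A7F),
                (0x0B80, 0x0BFF), (0x0C00, 0x0C7F), (0x0C80, 0x0CFF)]
initial_alphabet = [chr(cp) for lo, hi in INDIC_BLOCKS for cp in range(lo, hi + 1)]

trainer = trainers.BpeTrainer(
    vocab_size=65_536,                 # assumed value
    initial_alphabet=initial_alphabet,
    # Byte fallback needs the 256 byte tokens present in the vocabulary.
    special_tokens=["<unk>", "<s>", "</s>"] + [f"<0x{b:02X}>" for b in range(256)],
)

# "balanced_corpus.txt" is a placeholder path for the curated, rebalanced data.
tokenizer.train(["balanced_corpus.txt"], trainer)
tokenizer.save("indic_tokenizer.json")
```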

Comprehensive Performance Comparison

Our tokenizer's performance was evaluated against three established multilingual tokenizers: Gemma-3, TWO AI’s SUTRA, and Sarvam’s Sarvam-1. The evaluation used the FLORES development dataset across all seven supported languages.

Fertility Analysis (Tokens per Word)

The fertility metric measures tokenization efficiency - lower values indicate more efficient tokenization with fewer tokens needed per word.

| Language | Indic Tokenizer (Ours) | SUTRA | Sarvam-1 | Gemma-3 |
| --- | --- | --- | --- | --- |
| English | 1.35 | 1.14 | 1.43 | 1.28 |
| Hindi | 1.47 | 1.46 | 1.40 | 1.43 |
| Punjabi | 1.55 | 1.25 | 1.68 | 2.87 |
| Bengali | 1.71 | 1.85 | 2.07 | 1.72 |
| Telugu | 2.09 | 2.23 | 2.14 | 2.88 |
| Tamil | 2.16 | 2.28 | 2.17 | 2.42 |
| Kannada | 2.24 | 2.47 | 2.37 | 3.33 |

Table 1: Fertility rate comparison of our Indic Tokenizer with SUTRA, Sarvam-1, and Gemma-3 on the FLORES dataset.
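For reference, the fertility numbers in Table 1 correspond to a simple ratio: total tokens produced divided by total whitespace-separated words over a corpus. The helper below is an illustrative sketch of that calculation; `tokenize` stands for any callable mapping a string to a list of tokens and is not the actual evaluation code:

```python
def fertility(tokenize, sentences: list[str]) -> float:
    """Average number of tokens per whitespace-separated word over a corpus."""
    total_tokens = sum(len(tokenize(s)) for s in sentences)
    total_words = sum(len(s.split()) for s in sentences)
    return total_tokens / total_words

# Example usage with a Hugging Face `tokenizers` object (illustrative):
# fertility(lambda s: tokenizer.encode(s).tokens, flores_sentences)
```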

Key Findings:

The fertility results show our tokenizer performs particularly well on morphologically complex languages like Kannada, where it achieves a 33% improvement over Gemma-3. This reflects the success of the language-aware balancing strategy and careful initial alphabet design.

Out-of-Vocabulary (OOV) Analysis

All four tokenizers achieved 0% OOV rates across all languages, indicating complete vocabulary coverage on the test datasets. The zero OOV rate is particularly significant for production systems: it means the tokenizer can handle any text in the supported languages without ever emitting unknown tokens.
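The OOV rate itself is simple to measure: encode the evaluation set and count how often the unknown token appears. The helper below is an illustrative sketch (the unknown-token name is an assumption); with byte fallback enabled, unseen characters decompose into byte tokens rather than the unknown token, which is why the measured rate stays at zero:

```python
def oov_rate(tokenize, sentences: list[str], unk_token: str = "<unk>") -> float:
    """Fraction of produced tokens that are the unknown token."""
    tokens = [tok for s in sentences for tok in tokenize(s)]
    return sum(tok == unk_token for tok in tokens) / len(tokens)
```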

Sequence Length Efficiency

Normalized sequence length comparison against SUTRA baseline shows relative efficiency:

| Language | Indic Tokenizer (Ours) | SUTRA | Sarvam-1 | Gemma-3 |
| --- | --- | --- | --- | --- |
| English | 1.187 | 1.000 | 1.297 | 1.128 |
| Hindi | 1.007 | 1.000 | 0.993 | 0.979 |
| Punjabi | 1.234 | 1.000 | 1.373 | 2.286 |
| Bengali | 0.927 | 1.000 | 1.125 | 0.932 |
| Telugu | 0.936 | 1.000 | 0.979 | 1.290 |
| Tamil | 0.946 | 1.000 | 0.964 | 1.058 |
| Kannada | 0.910 | 1.000 | 1.005 | 1.351 |

Table 2: Normalized sequence length of our tokenizer, Sarvam-1, and Gemma-3 with SUTRA as the baseline on the FLORES dataset (lower is better).

Our Indic Tokenizer demonstrates excellent sequence efficiency with mean normalized length of 1.021 (closest to SUTRA's 1.000 baseline) and low standard deviation (0.134).
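Concretely, the normalization in Table 2 divides each tokenizer's token count on a text by SUTRA's count on the same text, so 1.000 means parity with the baseline. A minimal sketch of the calculation, together with the summary statistics quoted above (function names are illustrative):

```python
from statistics import mean, stdev

def normalized_length(tokenize, baseline_tokenize, sentences: list[str]) -> float:
    """Token count relative to a baseline tokenizer on the same corpus."""
    ours = sum(len(tokenize(s)) for s in sentences)
    base = sum(len(baseline_tokenize(s)) for s in sentences)
    return ours / base

# The summary statistics reported above come from the seven per-language
# values in Table 2 (Indic Tokenizer column):
per_language = [1.187, 1.007, 1.234, 0.927, 0.936, 0.946, 0.910]
print(round(mean(per_language), 3))   # 1.021
print(round(stdev(per_language), 3))  # 0.134
```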

Overall Model Comparison Summary

| Model | Average Fertility | Fertility Std. Dev. | Average OOV Rate | Average Sequence Length | Sequence Length Std. Dev. |
| --- | --- | --- | --- | --- | --- |
| Indic Tokenizer (Ours) | 1.796 | 0.364 | 0.0% | 1.021 | 0.134 |
| SUTRA | 1.811 | 0.536 | 0.0% | 1.000 | 0.000 |
| Sarvam-1 | 1.894 | 0.387 | 0.0% | 1.105 | 0.167 |
| Gemma-3 | 2.275 | 0.803 | 0.0% | 1.289 | 0.466 |

Figure 1: Overall comparison of our tokenizer, SUTRA, Sarvam-1, and Gemma-3 on Flores dataset.

Our tokenizer leads in the key metrics: it achieves the lowest average fertility (1.796), the most consistent fertility across languages (standard deviation 0.364), and, aside from SUTRA's self-referential baseline, the average normalized sequence length closest to 1.000 (1.021).

Vocabulary Distribution Analysis

The final tokenizer achieved optimal distribution reflecting both usage frequency and linguistic complexity:

| Category | Tokens | Percentage |
| --- | --- | --- |
| English | 20,275 | 30.9% |
| Bengali | 9,128 | 13.9% |
| Kannada | 7,919 | 12.1% |
| Hindi | 7,381 | 11.3% |
| Telugu | 7,192 | 11.0% |
| Tamil | 6,614 | 10.1% |
| Punjabi | 5,755 | 8.8% |
| Numbers | 667 | 1.0% |
| Punctuation | 283 | 0.4% |
| Other | 307 | 0.5% |

Figure 2: Vocabulary distribution across languages and character types.

This distribution reflects the rebalancing strategy while maintaining reasonable representation for all languages. English maintains the largest share due to its role as a lingua franca, but each Indic language receives substantial representation proportional to its tokenization needs.
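A breakdown like the one in Figure 2 can be reproduced by classifying every vocabulary entry by the Unicode block of its characters. The sketch below shows one simplified way to do this; the classification rules are illustrative, not the exact analysis script:

```python
from collections import Counter

SCRIPT_BLOCKS = {
    "Hindi":   (0x0900, 0x097F),  # Devanagari
    "Bengali": (0x0980, 0x09FF),
    "Punjabi": (0x0A00, 0x0A7F),  # Gurmukhi
    "Tamil":   (0x0B80, 0x0BFF),
    "Telugu":  (0x0C00, 0x0C7F),
    "Kannada": (0x0C80, 0x0CFF),
}

def classify(token: str) -> str:
    """Simplified script/category classification of one vocabulary token."""
    text = token.lstrip("▁")  # drop the Metaspace marker, if present
    if not text:
        return "Other"
    if all(ch.isdigit() for ch in text):
        return "Numbers"
    if all(not ch.isalnum() for ch in text):
        return "Punctuation"
    for name, (lo, hi) in SCRIPT_BLOCKS.items():
        if any(lo <= ord(ch) <= hi for ch in text):
            return name
    if all(ord(ch) < 128 for ch in text):
        return "English"
    return "Other"

def vocabulary_distribution(vocab: dict[str, int]) -> Counter:
    """Count vocabulary entries per language / character category."""
    return Counter(classify(tok) for tok in vocab)

# With the Hugging Face `tokenizers` library, something like
# vocabulary_distribution(tokenizer.get_vocab()) yields counts of this kind.
```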

Hardware Analysis

The journey from failure to success shows dramatic hardware improvements:

| Attempt | Data Size | Compute | Time |
| --- | --- | --- | --- |
| Failure #2 | 170 GB | 64 vCPUs | 11 hours |
| Success #1 | Small | 4 vCPUs | 5 min |
| Success #2 | 950 MB | 4 vCPUs | 22 min |

Table 3: Comparison across development iterations.

Hardware reduction: the successful run used 16x less compute than the failed attempt (4 vCPUs versus 64 vCPUs) while achieving superior quality compared to established tokenizers. This dramatic reduction has important implications for the democratization of multilingual NLP, making multilingual tokenizer development accessible to researchers and organizations with limited computational budgets.

Key Lessons Learned

1. Quality Over Quantity

Aggressive data filtering and curation produced better results than massive, unfiltered datasets. This aligns with recent research showing that careful data curation can match or exceed the performance of much larger, noisier datasets.

2. Language-Aware Balancing

Understanding each language's tokenization complexity enables better training balance. This goes beyond simple corpus statistics to consider the underlying linguistic properties that affect tokenization efficiency.

3. Smart Initial Vocabulary

Including morphologically meaningful subwords in the initial alphabet improves convergence. This reflects the importance of linguistic knowledge in designing NLP systems, rather than relying purely on statistical learning.

4. Iterative Development

Small-scale experimentation enabled rapid iteration and learning without massive costs. This methodology allows for quick hypothesis testing and reduces the risk of expensive failures.

5. Hardware Efficiency

Proper data engineering eliminates the need for expensive compute resources. This demonstrates that thoughtful algorithm design can often substitute for brute computational force.

Technical Implementation

The complete implementation demonstrates several best practices for multilingual tokenizer development: aggressive quality filtering, language-aware corpus balancing, a comprehensively seeded initial alphabet, BPE with byte fallback, and small-scale iterative experimentation.

Broader Implications

The success of our tokenizer has several important implications for the field of multilingual NLP:

Accessibility: By reducing costs dramatically, the approach makes multilingual tokenizer development accessible to smaller research groups and organizations in developing countries where computational resources are limited.

Sustainability: The reduced computational requirements align with growing concerns about the environmental impact of large-scale NLP training, demonstrating that effective models don't always require massive resource consumption.

Linguistic Equity: The language-aware balancing strategy addresses issues of linguistic bias in multilingual models, ensuring that morphologically complex languages receive adequate representation.

Conclusion

Building effective multilingual tokenizers doesn't require massive resources or datasets. The tokenizer demonstrates that thoughtful engineering, quality data curation, and iterative development can produce superior results compared to established tokenizers at a fraction of the cost.

Final specifications: a BPE tokenizer with byte fallback covering English, Hindi, Bengali, Kannada, Tamil, Punjabi, and Telugu, with a vocabulary of roughly 65K tokens (Figure 2), trained on 950 MB of curated data in 22 minutes on 4 vCPUs (Table 3).

The key insight: successful multilingual NLP isn't about having the most data or compute - it's about understanding the languages, curating quality data, and engineering solutions that respect linguistic diversity. This approach not only produces better results but also democratizes access to multilingual NLP capabilities.

The development journey illustrates a broader principle in machine learning: constraints often drive innovation. By being forced to work with limited resources, the project discovered more efficient approaches that ultimately outperformed resource-intensive alternatives.


For collaboration opportunities, or technical questions, please reach out to our team. Together, we can make lightweight, accessible language models a reality for everyone.
Contact us at [email protected].