Google has launched TranslateGemma, a new open AI translation model built on Gemma 3

Looking for a single translation tool that works across devices and supports 55 languages? Google has introduced TranslateGemma, a new family of open translation models built on its Gemma 3 platform.

TranslateGemma is available in three versions: 4B, 12B, and 27B parameters, bringing the capabilities of much larger systems into smaller, more efficient models that can run across a range of hardware.

Smaller Models Doing Bigger Work

For developers, this means high-quality translations can now be achieved using models with less than half the parameters of earlier systems, leading to lower latency, reduced computing costs, and better performance on everyday hardware. The smallest 4B model also delivers strong results, matching the quality of earlier 12B models and making it suitable for mobile and edge devices.

Furthermore, according to Google’s internal testing, the new models performed well despite their reduced size.

Diving deeper into the specifications: Google tested TranslateGemma on the WMT24++ dataset, which includes 55 languages spanning high-, medium-, and low-resource language families. Across all categories, TranslateGemma showed a lower error rate than baseline Gemma models, indicating consistently higher quality.

Tested Across 55 Languages

Google evaluated TranslateGemma using the WMT24++ dataset, which spans 55 languages. The mix includes widely spoken languages like Spanish, French, Chinese, and Hindi, along with languages that usually don’t receive much training data and lag in quality.

TranslateGemma made fewer translation mistakes than the standard Gemma models across all tested languages. The improvement wasn’t limited to widely spoken languages and remained noticeable even where training data is limited, a common weak point for translation systems.


How Google Trained It

TranslateGemma was built by distilling the translation strengths of Google’s Gemini models into a smaller, open model that requires less computing power while maintaining quality. The training pipeline used two fine-tuning stages: supervised fine-tuning (SFT) followed by reinforcement learning (RL).

Beyond the Core Languages

While 55 languages were fully trained and evaluated, Google went further. The company says TranslateGemma has also been trained on nearly 500 additional language pairs.

Although these extra languages have not yet been benchmarked, the full list has been released in a technical report aimed at researchers who want to experiment, fine-tune, or extend the models, particularly for low-resource languages.

As a result, for now, TranslateGemma is being positioned as a foundation-level product and not a finished one.

Text Inside Images Still Works

TranslateGemma also keeps the multimodal abilities of Gemma 3. In tests on the Vistra image translation benchmark, the models showed improved performance when translating text found inside images.

A key advantage here is that these improvements in translating text within images came without any extra multimodal fine-tuning: the gains in text translation carried over naturally. That makes tasks such as translating signs, menus, and scanned documents with a camera more reliable.

Built for Different Hardware

The three model sizes target very different environments. The 4B model is meant for phones and edge devices. The 12B model can run on consumer laptops, which lowers the barrier for local development and research; meanwhile, the 27B model focuses on maximum quality and can run on a single H100 GPU or TPU in the cloud.
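As a back-of-the-envelope check on those hardware claims, here is a minimal sketch of the weight footprint for each size. It assumes bfloat16 weights (2 bytes per parameter), which is a common but unconfirmed assumption here; real memory use is higher once activations and the KV cache are counted.

```python
def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed just to hold the weights, in GB."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# Rough weight footprints for the three TranslateGemma sizes in bf16.
for name, size in [("4B", 4), ("12B", 12), ("27B", 27)]:
    print(f"{name}: ~{weight_memory_gb(size):.0f} GB of weights in bf16")
```

Under that assumption, the 27B model needs roughly 54 GB for weights alone, which is consistent with it fitting on a single 80 GB H100, while the 4B model's ~8 GB footprint explains why it targets edge devices.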

The range of model sizes is intentional. Google wants translation to work across different types of hardware, not just high-end systems.

What to Expect from TranslateGemma

TranslateGemma doesn’t seem to be focused on adding new features; rather, its aim is to make more reliable translation available through smaller, more affordable models released openly.

For developers, this means more flexibility; for researchers, a foundation to build from. Regular users will gain access to faster translation that does not always rely on powerful servers. How practical TranslateGemma proves for its users will be something to look out for.
