Description

This document announces EmbeddingGemma, a new open embedding model from Google designed for on-device AI. It highlights the model's efficiency, compact size, and best-in-class multilingual text embedding performance for its size class. The source explains how EmbeddingGemma enables mobile-first Retrieval Augmented Generation (RAG) pipelines and semantic search by generating high-quality text embeddings directly on user hardware, preserving privacy and working offline. It also details the model's compatibility with popular development tools and its ability to offer flexible output dimensions while maintaining a small memory footprint. Finally, it contrasts EmbeddingGemma's strengths for on-device applications with other Google models better suited to large-scale server-side use.
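
To make the on-device workflow concrete, below is a minimal sketch of the retrieval step of such a pipeline using the Sentence Transformers library. The model identifier google/embeddinggemma-300m, the truncate_dim setting, and the example corpus are illustrative assumptions, not details taken from this description; consult the linked announcement for the exact model name and recommended usage.

```python
# Minimal on-device semantic search sketch with Sentence Transformers.
# Assumption: the model is available on Hugging Face as "google/embeddinggemma-300m".
from sentence_transformers import SentenceTransformer, util

# Loading with a reduced output dimension illustrates the flexible output
# dimensions mentioned above; truncate_dim is a Sentence Transformers option
# available in recent releases.
model = SentenceTransformer("google/embeddinggemma-300m", truncate_dim=256)

# A tiny illustrative corpus that would live entirely on the user's device.
documents = [
    "How to reset the phone to factory settings.",
    "Steps for pairing Bluetooth headphones.",
    "Enabling offline maps for travel.",
]
doc_embeddings = model.encode(documents)

# Embed the user's query and rank documents by cosine similarity,
# with no network call required.
query = "pair my wireless earbuds"
query_embedding = model.encode(query)
scores = util.cos_sim(query_embedding, doc_embeddings)[0]

best = int(scores.argmax())
print(f"Best match: {documents[best]} (score={float(scores[best]):.3f})")
```

In a full RAG pipeline, the top-ranked passages retrieved this way would then be passed as context to an on-device generative model.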

Source: https://developers.googleblog.com/en/introducing-embeddinggemma/