Google has released three new AI models that are safer, smaller, and more transparent than most existing models.
These models, called Gemma 2 2B, ShieldGemma, and Gemma Scope, are each designed for different applications and use cases, but all have safety at their core. They belong to the Gemma 2 series, which Google launched in May, and unlike the Gemini model series they are open, meaning developers can freely download, inspect, and build on the model weights.
Gemma 2 2B is a lightweight model designed for text analysis and generation, suited to both researchers and commercial applications. Thanks to its small size, it can run on most hardware, including laptops and edge devices, and it can be downloaded from Google's Vertex AI model library, the data science platform Kaggle, and Google's AI Studio toolkit.
ShieldGemma is a set of "safety classifiers" that detect toxic content such as hate speech, harassment, and sexually explicit material, letting developers monitor inputs and outputs and filter out potentially harmful content or prompts.
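In practice, a safety classifier of this kind scores text against harm categories, and the application blocks anything that crosses a threshold. The sketch below illustrates that classifier-as-gate pattern with a hypothetical keyword-based scorer; the function names, categories, and threshold are illustrative assumptions, not ShieldGemma's actual API.

```python
# Illustrative classifier-as-gate pattern. score_harm is a toy, hypothetical
# stand-in for a real safety classifier such as ShieldGemma -- a real system
# would use a fine-tuned model returning per-category probabilities.

HARM_CATEGORIES = ["hate_speech", "harassment", "sexually_explicit"]
BLOCK_THRESHOLD = 0.5  # illustrative cutoff, not a documented value

def score_harm(text: str) -> dict[str, float]:
    """Toy scorer: flags a category if a trigger word appears in the text."""
    triggers = {
        "hate_speech": ["hate"],
        "harassment": ["harass"],
        "sexually_explicit": ["explicit"],
    }
    lowered = text.lower()
    return {
        category: 1.0 if any(word in lowered for word in triggers[category]) else 0.0
        for category in HARM_CATEGORIES
    }

def filter_prompt(text: str) -> tuple[bool, dict[str, float]]:
    """Return (allowed, scores): block the prompt if any score crosses the threshold."""
    scores = score_harm(text)
    allowed = all(score < BLOCK_THRESHOLD for score in scores.values())
    return allowed, scores

allowed, scores = filter_prompt("Tell me a story about a friendly robot.")
print(allowed)  # a benign prompt passes the gate
```

The same gate can be applied twice: once to the user's prompt before it reaches the model, and once to the model's output before it reaches the user.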
Gemma Scope is a tool that lets developers peer into the inner workings of Gemma 2, offering easy-to-digest insights into how the model identifies patterns, processes information, and makes predictions.