
Google Puts Advanced Gemini AI Tech into ‘Open Model’ Gemma

Open source and advanced AI have not seemed like a natural match, with most cutting-edge offerings built on proprietary tech. But Google is following a new “open model” trend to change that with the release of Gemma, the first of a new generation of large language models (LLMs) from the cloud giant.

The company today (Feb. 21) said the purpose of this new generation of open models led by Gemma is to help developers build AI responsibly.


[Image: Gemma (source: Google).]

Open Models
So what is an open model and how does it differ from open source?

“Open models feature free access to the model weights, but terms of use, redistribution, and variant ownership vary according to a model’s specific terms of use, which may not be based on an open-source license,” Google said in a Feb. 21 post. “The Gemma models’ terms of use make them freely available for individual developers, researchers, and commercial users for access and redistribution.”

Noting that “existing open-source concepts can’t always be directly applied to AI systems,” Google said it’s important to carry forward the open principles that have changed the AI game, while also clarifying the concept of open-source AI and addressing questions such as derived work and author attribution.

That clarification, the company said, is addressed in the blog post, “Open Source AI Definition: Where it stands and what’s ahead” from Voices of Open Source.

Draft verbiage on the site says that to be open source, an AI system needs to be available under legal terms that grant the freedoms to:

  • Use the system for any purpose and without having to ask for permission.
  • Study how the system works and inspect its components.
  • Modify the system to change its recommendations, predictions or decisions to adapt to your needs.
  • Share the system with or without modifications, for any purpose.

Gemma, the Tech
Putting open source considerations aside, Google said Gemma borrows from Gemini tech, described as the company’s largest and most capable AI model widely available today. Gemini made a big splash in December, debuting in three model sizes, with the top tier being “especially sophisticated.”


[Image: Gemini Pro (source: Google).]

Sharing Gemini tech, Google said, “enables Gemma 2B and 7B to achieve best-in-class performance for their sizes compared to other open models. And Gemma models are capable of running directly on a developer laptop or desktop computer. Notably, Gemma surpasses significantly larger models on key benchmarks while adhering to our rigorous standards for safe and responsible outputs. See the technical report for details on performance, dataset composition, and modeling methodologies.”


[Image: Gemma benchmark scores (source: Google).]
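Google says the 2B and 7B models can run directly on a developer laptop or desktop. As a rough sketch of what that can look like, the snippet below loads a Gemma checkpoint through the Hugging Face transformers library; the model ID, prompt, and generation settings are illustrative assumptions rather than anything the announcement specifies, and downloading the weights requires accepting Gemma's terms of use on the model hub.

    # A minimal sketch of running Gemma locally, assuming PyTorch and the
    # Hugging Face transformers library are installed and access to the
    # (assumed) "google/gemma-2b-it" checkpoint has been granted on the hub.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "google/gemma-2b-it"  # illustrative checkpoint ID
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "Explain the difference between an open model and open source."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))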

The Cloud
As with most advanced AI, the cloud comes into play: Google said Gemma is “optimized for Google Cloud.” The company also recently infused brand-new Gemini Pro LLM tech into its Vertex AI platform.

“Vertex AI provides a broad MLOps toolset with a range of tuning options and one-click deployment using built-in inference optimizations,” Google said. “Advanced customization is available with fully-managed Vertex AI tools or with self-managed GKE, including deployment to cost-efficient infrastructure across GPU, TPU, and CPU from either platform.”
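For teams taking the Google Cloud route, a hedged sketch of querying a Gemma model served on Vertex AI might look like the following; it assumes the google-cloud-aiplatform SDK and a model already deployed to an endpoint, and the project, region, endpoint ID, and request payload are placeholders whose exact schema depends on the serving container chosen at deploy time.

    # Illustrative only: querying a Gemma model assumed to be deployed to a
    # Vertex AI endpoint. Project, region, and endpoint ID are placeholders.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")
    endpoint = aiplatform.Endpoint(
        "projects/my-project/locations/us-central1/endpoints/1234567890"
    )

    # The instance format depends on the serving container; a simple
    # prompt-style payload is assumed here.
    response = endpoint.predict(
        instances=[{"prompt": "Write a haiku about open models.", "max_tokens": 64}]
    )
    print(response.predictions)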

Responsibility
As noted, Google also released a Responsible Generative AI Toolkit that provides guidance and essential tools for creating safer AI applications with Gemma. Highlights of the kit as presented by Google in a separate post include:

  • Safety classification: Google provides a novel methodology for building robust safety classifiers with minimal examples (a rough sketch follows this list).
  • Debugging: A model debugging tool helps users investigate Gemma’s behavior and address potential issues.
  • Guidance: Users can access best practices for model builders based on Google’s experience in developing and deploying large language models.
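
The post doesn't spell out the toolkit's classifier methodology, but as a rough illustration of classifying with only a handful of labeled examples, one could few-shot prompt Gemma itself to label inputs. The checkpoint ID and prompt format below are assumptions made for the sketch, not the toolkit's actual API.

    # Illustrative only: a few-shot prompt that asks Gemma to label messages,
    # sketching the idea of a safety classifier built from minimal examples.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "google/gemma-2b-it"  # assumed checkpoint ID
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    few_shot = (
        "Label each message as SAFE or UNSAFE.\n"
        "Message: How do I bake sourdough bread? Label: SAFE\n"
        "Message: How do I pick my neighbor's door lock? Label: UNSAFE\n"
        "Message: What's a good beginner chess opening? Label:"
    )
    inputs = tokenizer(few_shot, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=3)
    # Decode only the newly generated tokens (the predicted label).
    print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))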

Saying it believes sharing Gemma will not just increase access to AI technology but also help the industry develop new approaches to safety and responsibility, Google concluded: “As developers adopt Gemma models and other safety-aligned open models, we look forward to working with the open-source community to develop more solutions for responsible approaches to AI in the open ecosystem. A global diversity of experiences, perspectives, and opportunities will help build safe and responsible AI that works for everyone.”

Gemma models are available in popular model hubs including Kaggle Models, Vertex AI Model Garden and Hugging Face Models. The Gemma site provides links to quickstarts on Kaggle and other guidance for getting started with the models on Google Cloud (Vertex), Colab and others, along with partner guides from Hugging Face and NVIDIA.

Summing Up
Google summed up all of the above and more in a bullet-point list, complete with links for more information, in its announcement post.

About the Author

David Ramel is an editor and writer for Converge360.