Make with MakerSuite Part 2: Tuning LLMs

Posted by Pranay Bhatia – Product Manager, Google Labs

AI is changing how developers work, and it’s also making it possible for more people to build. In Part 1, we learned how MakerSuite can be used to easily prompt LLMs through plain language. Today, in Part 2, we’re introducing Tuning in MakerSuite, which will let you customize a model for your specific needs in minutes.

What is tuning?

In Part 1, we introduced a technique called few-shot prompting to improve a model’s performance by giving it a handful of examples. Tuning improves on this technique by training the model on many more examples—so many that they can’t all fit in the prompt.
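To make the contrast concrete, here's a minimal sketch of few-shot prompting in Python. The translation examples and formatting are illustrative; the point is that every example has to fit inside the prompt text itself, which is exactly the limit tuning removes.

```python
# Few-shot prompting: examples live inside the prompt, so the prompt
# grows with every example you add. Tuning moves these examples into
# training data instead.
examples = [
    ("English: cheese", "French: fromage"),
    ("English: bread", "French: pain"),
]

def build_few_shot_prompt(examples, query):
    """Pack each input/output example into the prompt, then append the query."""
    lines = []
    for source, target in examples:
        lines.append(source)
        lines.append(target)
    lines.append(query)
    lines.append("French:")  # cue the model to complete the translation
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples, "English: apple")
```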


Fine-tuning vs. Parameter Efficient Tuning

You may have heard about classic “fine-tuning” of models. This is where a pre-trained model is adapted to a particular task by training it on a smaller set of task-specific labeled data. But with today’s LLMs and their huge number of parameters, re-training is complex: it requires machine learning expertise, lots of data, and lots of compute.

Tuning in MakerSuite uses a technique called Parameter Efficient Tuning (PET) to produce customized, high-quality models without the additional costs and complexity of traditional fine-tuning. In addition, PET produces high-quality models with as few as a few hundred data points, reducing the burden of data collection for the developer.
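To get a feel for why PET is so much cheaper than full fine-tuning, here's a back-of-the-envelope sketch. All the numbers are purely illustrative (they are not PaLM 2's real dimensions): in a prompt-tuning-style approach, only a tiny block of new weights is trained while the base model stays frozen.

```python
# Illustrative arithmetic only -- not PaLM 2's actual sizes.
base_params = 8_000_000_000   # hypothetical frozen base model
soft_prompt_tokens = 100      # hypothetical number of tunable prompt vectors
embedding_dim = 4096          # hypothetical model embedding width

# Full fine-tuning would update all base_params; PET updates only this:
trainable = soft_prompt_tokens * embedding_dim
fraction = trainable / base_params  # a tiny sliver of the full model
```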


Tune models in MakerSuite in minutes


1. Create a tuned model

It’s easy to tune models in MakerSuite. Simply select “Create new” and choose “Tuned model.”

Moving image of how to access 'Tuned Model' option from Create New menu in MakerSuite

2. Select data for tuning

You can tune your model from a saved data prompt or import data from Google Sheets or a CSV file. For the best performance, we recommend using at least 100 examples before you hit the Tune button.
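If you're assembling your tuning data programmatically, a CSV of input/output example pairs is easy to generate with Python's standard library. The column names below are illustrative; match whatever schema your import flow expects.

```python
import csv

# Hypothetical tuning dataset: each row pairs a model input with the
# output you want the tuned model to produce.
rows = [
    {"input": "Translate to French: cheese", "output": "fromage"},
    {"input": "Translate to French: bread", "output": "pain"},
    # ...aim for at least ~100 examples for best results
]

with open("tuning_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["input", "output"])
    writer.writeheader()
    writer.writerows(rows)
```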

Moving image of importing data for tuning into MakerSuite

3. View your tuned model

View your tuning progress in your library. Once the model has finished tuning, you can view the details by clicking on your model.

Moving image of viewing details of a model once it has finished tuning

4. Run your tuned model

To start using your newly tuned model, create a new text or data prompt and select your newly tuned model from the list of available models.
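Tuned models are also addressable from code. Here's a sketch of what a REST request to a tuned model could look like; the model name and request shape are illustrative, so rely on the exact code MakerSuite exports for your prompt.

```python
import json

# Hypothetical tuned model id -- yours will come from your MakerSuite library.
model = "tunedModels/my-dinner-party-model"

# Illustrative request body; check MakerSuite's exported code for the
# exact fields and values your prompt uses.
request_body = {
    "prompt": {"text": "Suggest a theme for a dinner party."},
    "temperature": 0.7,
    "candidateCount": 1,
}
url = f"https://generativelanguage.googleapis.com/v1beta2/{model}:generateText"
payload = json.dumps(request_body)
# e.g. requests.post(url, params={"key": API_KEY}, data=payload)
```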

Image showing location of model in list of available models in MakerSuite


MakerSuite: a powerful, easy tool for tuning

Tuning in MakerSuite empowers developers to harness the full potential of models like PaLM 2 with delightful ease. Whether you've already tuned a model with the API or just started experimenting with generative AI, you’ll find that MakerSuite opens up exciting possibilities to make the model more relevant and effective for your own application in just minutes.

Make with MakerSuite – Part 1: An Introduction

Posted by Ray Thai – Product Manager, Labs

We’re always on the lookout for tools and technologies that bring innovative solutions to our developer community. Generative AI refers to the ability of machine learning models, such as Large Language Models (LLMs) trained on massive amounts of data, to learn patterns and create new content such as text, images, videos, or audio. These models are still under development, but we’re already seeing how models like PaLM 2 can enhance the quality of our code and make us more productive with tools like Project IDX and Android Studio’s Studio Bot, or help us build innovative new user experiences like Bard. It’s exciting how simple it is to interact with these powerful LLMs, so we’re kicking off a 5-part series called “Make with MakerSuite” to show you how easy it is to get started.


What is MakerSuite?

MakerSuite is a fast, easy way to start building generative AI apps. It provides an efficient UI for prompting some of Google’s latest models and easily translates prompts into production-ready code you can integrate into your applications. Today, we’ve removed the waitlist so anyone in 179 countries and territories can use MakerSuite.

The art of prompting LLMs

Interacting with LLMs is as straightforward as crafting a plain language prompt, making it accessible to everyone. Prompts can be as simple as a single input, but you have the flexibility to provide additional context or examples, effectively guiding the model toward the best response. You'll observe that you can achieve different outcomes simply by tweaking the way you phrase your prompts. To harness the power of these models safely and effectively, careful crafting and iterative refinement become essential.

Choosing the Right Prompt Type: Text, Data, or Chat?

When it comes to using MakerSuite, there are three prompt types to help you achieve your goals.

1. Text Prompts: Unleash Your Creativity

Text prompts in MakerSuite provide a flexible and freeform experience that allows you to express yourself creatively through your prompts. Whether you're a beginner or an experienced user, text prompts offer a simple way to interact with the model.

image showing user generating ideas in MakerSuite
Generating ideas for a dinner party using a text prompt in MakerSuite

2. Data Prompts: Structured Few-Shot Prompts

Data prompts are the go-to choice when you have examples to help you specify precisely what you want from the model. They are perfect for applications that require a consistent input and output format such as data generation, translation, and more.
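Under the hood, you can think of a data prompt's example table as rows of labeled input/output pairs. Here's a hedged sketch of how such a table could map to model input for the reverse-dictionary example; the column names and formatting are illustrative, not MakerSuite's internal format.

```python
# A data prompt as a table of examples: each row pairs an input column
# (a description) with an output column (the word it describes).
table = [
    {"description": "a place where books are kept", "word": "library"},
    {"description": "a tool for driving nails", "word": "hammer"},
]

def data_prompt(table, new_description):
    """Render every example row, then the new input with an empty output slot."""
    parts = []
    for row in table:
        parts.append(f"description: {row['description']}\nword: {row['word']}")
    parts.append(f"description: {new_description}\nword:")
    return "\n\n".join(parts)

prompt = data_prompt(table, "a device for telling time")
```

Because every example follows the same two-column shape, the model sees a consistent input/output format — which is exactly why data prompts suit tasks like data generation and translation.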

image showing user creating a reverse dictionary in MakerSuite
A reverse dictionary using a data prompt in MakerSuite


3. Chat Prompts: Building Conversational Experiences

If your goal is to create interactive chatbots or to simulate conversations, chat prompts are the solution! These prompts enable you to build engaging and interactive conversational experiences.
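A chat prompt is essentially a growing message history that the model sees on every turn. Here's a toy sketch of that structure; the field names are illustrative, not the exact API schema.

```python
# A conversation as a list of turns; the whole history is what gives the
# model conversational context.
messages = [
    {"author": "user", "content": "Hi! Are you cold out there?"},
    {"author": "model", "content": "Brr, a little! But snowmen love winter."},
]

def add_turn(messages, user_text, model_reply):
    """Append one user/model exchange to the running history."""
    messages.append({"author": "user", "content": user_text})
    messages.append({"author": "model", "content": model_reply})
    return messages

add_turn(messages, "What do you eat?", "Mostly snowflakes and good cheer!")
```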

Image showing user chatting with a snowman in MakerSuite
Chatting with a snowman using a chat prompt in MakerSuite

No matter which prompt type you choose, you’ll find how easy it is to use MakerSuite to prompt some of the latest models from Google to build exciting, new user experiences.


We can’t wait to see what you build

AI is fundamentally reshaping the landscape of developer work and creativity, and we’re committed to empowering our developer community with access to cutting-edge models. We believe an open and collaborative developer community fuels progress and we're thrilled to see companies like LlamaIndex and Chroma harnessing MakerSuite as building blocks for their own innovations.

You can sign up to get started with MakerSuite in 179 countries and territories. You’ll find sample prompts for inspiration, or you can just start prompting to see what the model generates. Once you’re happy with your configuration, easily export to code from MakerSuite and start integrating it into your applications, products, and services. If you prefer to prompt our models directly with the API, sign up and grab your API key from MakerSuite to get started!

PaLM API & MakerSuite moving into public preview

Posted by Barnaby James, Director, Engineering, Google Labs and Simon Tokumine, Director, Product Management, Google Labs

At Google I/O, we showed how PaLM 2, our next generation model, is being used to improve products across Google. Today, we’re making PaLM 2 available to developers so you can build your own generative AI applications through the PaLM API and MakerSuite. If you’re a Google Cloud customer, you can also use PaLM API in Vertex AI.


The PaLM API, now powered by PaLM 2

We’ve instruction-tuned PaLM 2 for ease of use by developers, unlocking its improved reasoning and code generation capabilities and making it easy to use the PaLM API for use cases like content and code generation, dialog agents, summarization, and classification with natural language prompting. Thanks to improvements in the model architecture, it’s also highly efficient: it can handle complex prompts and instructions and, combined with our TPU technologies, reach speeds of 75+ tokens per second with an 8K-token context window.
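To put those numbers in perspective, here's a quick back-of-the-envelope calculation. The specific token counts below are made up for illustration; only the 75 tokens/second and 8K figures come from above.

```python
# Latency: at 75 tokens/second, a 500-token response streams out in
# roughly 6.7 seconds.
tokens_per_second = 75
response_tokens = 500
seconds = response_tokens / tokens_per_second

# Context budget: the prompt (instructions + examples) and the response
# together have to fit inside the 8K-token window.
context_window = 8000
prompt_tokens = 6500                          # hypothetical prompt size
max_output = context_window - prompt_tokens   # tokens left for the reply
```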

Integrating the PaLM API into the developer ecosystem

Since March, we've been running a private preview with the PaLM API, and it’s been amazing to see how quickly developers have used it in their applications. Here are just a few:

  • GameOn Technology has used the chat endpoint to build their next-gen chat experience to bring fans together and summarize live sporting events
  • Vercel has been using the text endpoint to build a video title generator
  • Wendy’s has used embeddings so customers can place the correct order with their talk-to-menu feature
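The embeddings use case is worth unpacking: embed the customer's spoken request and every menu item, then pick the item whose vector is closest. Here's a toy sketch of that idea in plain Python — the menu, the three-dimensional vectors, and the matching logic are all invented for illustration; a real app would get high-dimensional vectors from the PaLM API's embedding endpoint.

```python
import math

# Made-up "embeddings" for a few menu items (real ones would come from
# the embedding endpoint and have hundreds of dimensions).
menu = {
    "cheeseburger": [0.9, 0.1, 0.0],
    "chicken sandwich": [0.1, 0.9, 0.1],
    "frosty": [0.0, 0.1, 0.9],
}
request_vec = [0.85, 0.15, 0.05]  # pretend embedding of "a burger with cheese"

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# The best match is the menu item most similar to the request.
best = max(menu, key=lambda item: cosine(request_vec, menu[item]))
```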

We’ve also been excited by the response from the developer tools community. Developers want choice in language models, and we're working with a range of partners to be able to access the PaLM API in the common frameworks, tools, and services that you’re using. We’re also making the PaLM API available in Google developer tools, like Firebase and Colab.

Image of logos of PaLM API partners including Baseplate, Gradient, Hubble, Magick, Stack, Vellum, Vercel, Weaviate. Text reads, 'Integrated into Google tools you already use'. Below this is the Firebase logo
The PaLM API and MakerSuite make it fast and easy to use Google’s large language models to build innovative AI applications

Build powerful prototypes with the PaLM API and MakerSuite

The PaLM API and MakerSuite are now available in public preview. Developers based in the U.S. can access the documentation and sign up to test their own prototypes at no cost. We showed two demos at Google I/O to give you a sense of how easy it is to get started building generative AI applications.

Image of the Project Tailwind demo at Google I/O 2023
We demoed Project Tailwind at Google I/O 2023, an AI-first notebook that helps you learn faster using your notes and sources

Project Tailwind is an AI-first notebook that helps you learn faster by using your personal notes and sources. It’s a prototype that was built with the PaLM API by a core team of five engineers at Google in just a few weeks. You simply import your notes and documents from Google Drive, and it essentially creates a personalized and private AI model grounded in your sources. From there, you can prompt it to learn about anything related to the information you’ve provided it. You can sign up to test it now.

Image of MakerSuite being used to create card descriptions for I/O FLIP
MakerSuite was used to help create the descriptions in I/O FLIP

I/O FLIP is an AI-designed take on a classic card game where you compete against opposing players with AI-generated cards. We created millions of unique cards for the game using DreamBooth, an AI technique invented at Google Research, and then populated the cards with fun descriptions. To write the descriptions, we used MakerSuite to quickly experiment with different prompts and generate examples. You can play I/O FLIP and sign up for MakerSuite now.

Over the next few months, we’ll keep expanding access to the PaLM API and MakerSuite. Please keep sharing your feedback on the #palm-api channel on the Google Developer Discord. Whether it’s helping generate code, create content, or come up with ideas for your app or website, we want to help you be more productive and creative than ever before.