
How it’s Made: TextFX is a suite of AI tools inspired by Lupe Fiasco’s lyrical and linguistic techniques

Posted by Aaron Wade, Creative Technologist

Google Lab Sessions is a series of experimental AI collaborations with innovators. In our latest Lab Session we wanted to explore specifically how AI could expand human creativity. So we turned to GRAMMY® Award-winning rapper and MIT Visiting Scholar Lupe Fiasco to build an AI experiment called TextFX.



The discovery process

We started by spending time with Lupe to observe and learn about his creative process, which was invariably marked by a sort of linguistic “tinkering”: deconstructing language and then reassembling it in novel and innovative ways. Some of Lupe’s techniques, such as simile and alliteration, draw from the canon of traditional literary devices, but many of his tactics are entirely his own. Among them is a clever way of creating phrases that sound identical to a given word but have different meanings, which he demonstrated for us using the word “expressway”:

express whey (speedy delivery of dairy byproduct)

express sway (to demonstrate influence)

ex-press way (path without news media)

These sorts of operations played a critical role in Lupe’s writing. In light of this, we began to wonder: How might we use AI to help Lupe explore creative possibilities with text and language?

When it comes to language-related applications, large language models (LLMs) are the obvious choice from an AI perspective. LLMs are a category of machine learning models that are specially designed to perform language-related tasks, and one of the things we can use them for is generating text. But the question still remained as to how LLMs would actually fit into Lupe’s lyric-writing workflow.

Some LLMs, such as Google’s Bard, are fine-tuned to function as conversational agents. Others, such as the PaLM API’s Text Bison model, lack this conversational element and instead generate text by extending or fulfilling a given input text. One of the great things about this latter type of LLM is its capacity for few-shot learning: it can recognize patterns that occur in a small set of training examples and then replicate those patterns for novel inputs.

As an initial experiment, we had Lupe provide more examples of his same-sounding phrase technique. We then used those examples to construct a prompt, which is a carefully crafted string of text that primes the LLM to behave in a certain way. Our initial prompt for the same-sounding phrase task looked like this:

Word: defeat
Same-sounding phrase: da feet (as in "the feet")

Word: surprise
Same-sounding phrase: Sir Prize (a knight whose name is Prize)

Word: expressway
Same-sounding phrase: express whey (speedy delivery of dairy byproduct)

(...additional examples...)

Word: [INPUT WORD]
Same-sounding phrase:


This prompt yielded passable outputs some of the time, but we felt there was still room for improvement. We found that factors beyond the content and quantity of examples could influence the output: how the task is framed, how inputs and outputs are represented, and so on. After several iterations, we arrived at the following:

A same-sounding phrase is a phrase that sounds like another word or phrase.


Here is a same-sounding phrase for the word "defeat":

da feet (as in "the feet")


Here is a same-sounding phrase for the word "surprise":

Sir Prize (a knight whose name is Prize)


Here is a same-sounding phrase for the word "expressway":

express whey (speedy delivery of dairy byproduct)


(...additional examples...)


Here is a same-sounding phrase for the word "[INPUT WORD]":
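
The final prompt format above is simple enough to assemble programmatically. Here is a minimal sketch in Python (illustrative only, not the TextFX source), using the example pairs from the post:

```python
# Few-shot examples from the post: (word, same-sounding phrase).
EXAMPLES = [
    ("defeat", 'da feet (as in "the feet")'),
    ("surprise", "Sir Prize (a knight whose name is Prize)"),
    ("expressway", "express whey (speedy delivery of dairy byproduct)"),
]

PREAMBLE = (
    "A same-sounding phrase is a phrase that sounds like "
    "another word or phrase."
)

def build_prompt(input_word: str) -> str:
    """Assemble the few-shot prompt, ending with an incomplete
    input-output pair for the model to complete."""
    parts = [PREAMBLE]
    for word, phrase in EXAMPLES:
        parts.append(
            f'Here is a same-sounding phrase for the word "{word}":\n\n{phrase}'
        )
    # The final line is left incomplete; the model's continuation
    # becomes the output for the user's word.
    parts.append(f'Here is a same-sounding phrase for the word "{input_word}":')
    return "\n\n".join(parts)

print(build_prompt("penny"))
```

The string returned by, say, `build_prompt("penny")` matches the format above and can be sent to a text-completion model as-is.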

After successfully codifying the same-sounding phrase task into a few-shot prompt, we worked with Lupe to identify additional creative tasks that we might be able to accomplish using the same few-shot prompting strategy. In the end, we devised ten prompts, each uniquely designed to explore creative possibilities that may arise from a given word, phrase, or concept:

SIMILE - Create a simile about a thing or concept.

EXPLODE - Break a word into similar-sounding phrases.

UNEXPECT - Make a scene more unexpected and imaginative.

CHAIN - Build a chain of semantically related items.

POV - Evaluate a topic through different points of view.

ALLITERATION - Curate topic-specific words that start with a chosen letter.

ACRONYM - Create an acronym using the letters of a word.

FUSE - Find intersections between two things.

SCENE - Generate sensory details about a scene.

UNFOLD - Slot a word into other existing words or phrases.

We were able to quickly prototype each of these ideas using MakerSuite, which is a platform that lets users easily build and experiment with LLM prompts via an interactive interface.

Moving image showing a few-shot prompt in MakerSuite

How we made it: building using the PaLM API

After we finalized the few-shot prompts, we built an app to house them. We decided to call it TextFX, drawing from the idea that each tool has a different “effect” on its input text. Like a sound effect, but for text.

Moving image showing the TextFX user interface

We save our prompts as strings in the source code and send them to Google’s PaLM 2 model using the PaLM API, which serves as an entry point to Google’s large language models.

All of our prompts are designed to terminate with an incomplete input-output pair. When a user submits an input, we append that input to the prompt before sending it to the model. The model predicts the corresponding output(s) for that input, and then we parse each result from the model response and do some post-processing before finally surfacing the result in the frontend.
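
In code, that request flow can be sketched roughly like this (illustrative Python, not the TextFX source; `parse_candidates` stands in for whatever post-processing the app performs):

```python
def complete_prompt(prompt_template: str, user_input: str) -> str:
    """Splice the user's input into the stored prompt, which ends
    with an incomplete input-output pair."""
    return prompt_template.replace("[INPUT WORD]", user_input)

def parse_candidates(raw_candidates: list[str]) -> list[str]:
    """Hypothetical post-processing: trim whitespace, drop empty
    results, and de-duplicate case-insensitively."""
    seen, results = set(), []
    for candidate in raw_candidates:
        candidate = candidate.strip()
        if candidate and candidate.lower() not in seen:
            seen.add(candidate.lower())
            results.append(candidate)
    return results

# In TextFX, the completed prompt is sent to the model via the PaLM API;
# here we stub the model's candidate outputs to show the parsing step.
stubbed_response = [
    "  pen knee (a knee for pens)  ",
    "",
    "Pen Knee (a knee for pens)",
]
print(parse_candidates(stubbed_response))  # -> ['pen knee (a knee for pens)']
```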

Diagram of information flow between TextFX and Google's PaLM 2 large language models

Users may optionally adjust the model temperature, which is a hyperparameter that roughly corresponds to the amount of creativity allowed in the model outputs.
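
As a rough intuition (an illustrative softmax sketch, not the PaLM API's internals): temperature divides the model's raw next-token scores before they are normalized into probabilities, so low values concentrate probability on the top choices and high values flatten the distribution, making sampled text more varied.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into probabilities; lower temperature sharpens
    the distribution, higher temperature flattens it."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
low = softmax_with_temperature(logits, 0.2)   # conservative sampling
high = softmax_with_temperature(logits, 1.5)  # more "creative" sampling
print(max(low) > max(high))  # True: low temperature concentrates mass
```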

Try it yourself

You can try TextFX for yourself at textfx.withgoogle.com.

We’ve also made all of the LLM prompts available in MakerSuite. If you have access to the public preview for the PaLM API and MakerSuite, you can create your own copies of the prompts. Otherwise, you can join the waitlist.


And in case you’d like to take a closer look at how we built TextFX, we’ve open-sourced the code here.


A final word

TextFX is an example of how you can experiment with the PaLM API and build applications that leverage Google’s state-of-the-art large language models. More broadly, this exploration speaks to the potential of AI to augment human creativity. TextFX targets creative writing, but what might it mean for AI to enter other creative domains as a collaborator? Creators play a crucial role in helping us imagine what these collaborations might look like. Our hope is that this Lab Session gives you a glimpse of what’s possible with the PaLM API and inspires you to use Google’s AI offerings to bring your own ideas to life, whatever your craft may be.

If you’d like to explore more Lab Sessions like this one, head over to labs.google.com.

PaLM API & MakerSuite moving into public preview

Posted by Barnaby James, Director, Engineering, Google Labs and Simon Tokumine, Director, Product Management, Google Labs

At Google I/O, we showed how PaLM 2, our next generation model, is being used to improve products across Google. Today, we’re making PaLM 2 available to developers so you can build your own generative AI applications through the PaLM API and MakerSuite. If you’re a Google Cloud customer, you can also use PaLM API in Vertex AI.


The PaLM API, now powered by PaLM 2

We’ve instruction-tuned PaLM 2 for ease of use by developers, unlocking its improved reasoning and code-generation capabilities. Using natural language prompting, developers can apply the PaLM API to use cases like content and code generation, dialog agents, summarization, and classification. Thanks to new model architecture improvements, PaLM 2 is highly efficient at handling complex prompts and instructions, and, combined with our TPU technologies, it supports speeds of 75+ tokens per second and 8k context windows.

Integrating the PaLM API into the developer ecosystem

Since March, we've been running a private preview with the PaLM API, and it’s been amazing to see how quickly developers have used it in their applications. Here are just a few:

  • GameOn Technology has used the chat endpoint to build their next-gen chat experience to bring fans together and summarize live sporting events
  • Vercel has been using the text endpoint to build a video title generator
  • Wendy’s has used embeddings so customers can place the correct order with their talk-to-menu feature

We’ve also been excited by the response from the developer tools community. Developers want choice in language models, and we're working with a range of partners so you can access the PaLM API from the common frameworks, tools, and services you’re already using. We’re also making the PaLM API available in Google developer tools, like Firebase and Colab.

Image of logos of PaLM API partners including Baseplate, Gradient, Hubble, Magick, Stack, Vellum, Vercel, Weaviate. Text reads, 'Integrated into Google tools you already use.' Below this is the Firebase logo
The PaLM API and MakerSuite make it fast and easy to use Google’s large language models to build innovative AI applications

Build powerful prototypes with the PaLM API and MakerSuite

The PaLM API and MakerSuite are now available for public preview. Developers based in the U.S. can access the documentation and sign up to test their own prototypes at no cost. We showed two demos at Google I/O to give you a sense of how easy it is to get started building generative AI applications.

We demoed Project Tailwind at Google I/O 2023, an AI-first notebook that helps you learn faster using your notes and sources

Project Tailwind is an AI-first notebook that helps you learn faster by using your personal notes and sources. It’s a prototype that was built with the PaLM API by a core team of five engineers at Google in just a few weeks. You simply import your notes and documents from Google Drive, and it essentially creates a personalized and private AI model grounded in your sources. From there, you can prompt it to learn about anything related to the information you’ve provided it. You can sign up to test it now.

MakerSuite was used to help create the descriptions in I/O FLIP

I/O FLIP is an AI-designed take on a classic card game where you compete against opposing players with AI-generated cards. We created millions of unique cards for the game using DreamBooth, an AI technique invented in Google Research, and then populated the cards with fun descriptions. To build the descriptions, we used MakerSuite to quickly experiment with different prompts and generate examples. You can play I/O FLIP and sign up for MakerSuite now.

Over the next few months, we’ll keep expanding access to the PaLM API and MakerSuite. Please keep sharing your feedback on the #palm-api channel on the Google Developer Discord. Whether it’s helping generate code, create content, or come up with ideas for your app or website, we want to help you be more productive and creative than ever before.