However, the perception that using NLU and machine learning is costly and time-consuming prevents many potential users from exploring its benefits.
To make NLU less intimidating, and to demonstrate how easily it can be used with pre-trained, generic models, we have released a tool, the Semantic Reactor, and open-sourced example code, The Mystery of the Three Bots.
The Semantic Reactor

The Semantic Reactor is a Google Sheets Add-On that allows the user to sort lines of text in a sheet using a variety of machine-learning models. It is released as a whitelisted experiment, so if you would like to check it out, fill out this application at the Google Cloud AI Workshop. Once approved, you’ll be emailed instructions on how to install it.
The tool offers ranking methods that determine how the list will be sorted. With the semantic similarity method, the lines more similar in meaning to the input will be ranked higher.
With the input-response method, the lines that are the most appropriate conversational responses are ranked higher.
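Under the hood, semantic-similarity ranking amounts to embedding each line of text as a vector and sorting the list by cosine similarity to the input. The following is a minimal sketch of that sorting step; the tiny hand-made vectors here are hypothetical stand-ins for the high-dimensional embeddings a sentence-encoder model would actually produce:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(query_vec, candidates):
    """Sort (text, vector) candidates by similarity to the query, best first."""
    return sorted(candidates, key=lambda c: cosine(query_vec, c[1]), reverse=True)

# Toy embeddings; in practice these would come from a sentence-encoder model.
candidates = [
    ("What are cookies made of?", [0.9, 0.1, 0.0]),
    ("What drinks go well with cookies?", [0.1, 0.9, 0.2]),
    ("Are cookies also called biscuits?", [0.0, 0.2, 0.9]),
]
query = [0.8, 0.2, 0.1]  # pretend embedding of "What are cookie ingredients?"
print(rank_by_similarity(query, candidates)[0][0])
# -> What are cookies made of?
```

The model only changes how the vectors are produced; the ranking itself is this simple sort.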
Why use the Semantic Reactor?

There are a lot of interesting things you can do with the Semantic Reactor, but let’s look at the following two:
- Writing dialogue for a bot that exists within a well-defined environment and has a clear purpose (like a customer service bot) using semantic similarity.
- Searching within large collections of text, like from a message board. For that, we will use input-response.
Writing Dialogue for a Bot Using Semantic Similarity

For the sake of an example, let’s say you are writing dialogue for a bot that answers questions about a product, in this case, cookies.
If you’ve been running a cookie hotline for a while, you can probably list the most common cookie questions. With that data, you can create your cookie bot. Start by opening a Google Sheet and writing the common questions and answers (questions in column A, answers in column B).
Here is the start of what that Sheet might look like. Make a copy of the Sheet, which will allow you to use the Semantic Reactor Add-on, and then use the tool to experiment with new QA pairs and see how each model responds to them.
Here are a few queries to try, using the semantic similarity rank method:
Query: What are cookie ingredients?
Returns: What are cookies made of?
Query: Are cookies biscuits?
Returns: Are cookies also called biscuits?
Query: What should I serve with cookies?
Returns: What drinks go well with cookies?
Of course, that small list of responses won’t cover many of the questions people will ask your cookie bot. What the Reactor allows you to do is quickly add new QA pairs as you learn about what your users want to ask.
For example, maybe people are asking a lot about cookie calories.
You’d write the new question in column A, and the new answer in column B, and then test a few different phrasings with the Reactor. You might need to tweak the target response a few times to make sure it matches a wide variety of phrasings. You should also experiment with the three different models to see which one performs the best.
For instance, let’s say the new target question you want the model to match to is: “How many calories does a typical cookie have?”
That question might be phrased by users as:
- Are cookies caloric?
- A lot of calories in a cookie?
- Will cookies wreck my diet?
- Are cookies fattening?
The more you test with live users, the more you’ll find that they phrase their questions in ways you don’t expect. As with all things based on machine learning, constant data refreshes, testing, and improvement are all part of the process.
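The workflow described above, with canonical questions in column A and answers in column B, boils down to matching a user’s phrasing against the closest canonical question and returning its answer. Here is a minimal sketch of that lookup; a crude bag-of-words cosine stands in for the sentence-encoder embeddings the Semantic Reactor actually uses, so the matching is far weaker than the real model’s, and the QA pairs are invented for illustration:

```python
import math
from collections import Counter

# Hypothetical QA pairs, as they would appear in columns A and B of the Sheet.
QA_PAIRS = {
    "What are cookies made of?": "Butter, sugar, flour, and eggs are common ingredients.",
    "How many calories does a typical cookie have?": "Around 50-200 calories, depending on size.",
    "What drinks go well with cookies?": "Milk, coffee, and tea are classics.",
}

def embed(text):
    """Crude stand-in for a sentence encoder: a bag-of-words count vector."""
    return Counter(text.lower().strip("?").split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def answer(query):
    """Return the answer whose canonical question best matches the query."""
    best = max(QA_PAIRS, key=lambda q: cosine(embed(query), embed(q)))
    return QA_PAIRS[best]

print(answer("A lot of calories in a cookie?"))
# -> Around 50-200 calories, depending on size.
```

Testing new phrasings against the QA pairs, as the Reactor lets you do in the Sheet, is exactly this loop: embed the query, score it against every canonical question, and inspect which one wins.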
Searching Through Text Using Input-Response

Sometimes you can’t anticipate what users are going to ask, and sometimes you might be dealing with a lot of potential responses, maybe thousands. In cases like that, you should use the input-response ranking method. That means the model will examine the list of potential responses and then rank each one according to what it thinks is the most likely response.
Here is a Sheet containing a list of simple conversational responses. Using the input-response ranking method, try a few generic conversational openers like “Hello” or “How’s it going?”
Note that in input-response mode, the model is predicting the most likely conversational response to an input and not the most semantically similar response.
Note that “Hello,” in input-response mode, returns “Nice to meet you.” In semantic similarity mode, “Hello” returns what the model thinks is semantically closest to “Hello,” which is “What’s up?”
Now try your own! Add potential responses, and switch between the models and ranking methods to see how the results change (be sure to hit the “reload” button every time you add new responses).
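The distinction between the two modes comes down to how candidates are scored. Input-response models are commonly built as dual encoders: one encoder for inputs, a separate one for responses, trained so that good conversational pairs end up with a high dot product. The sketch below illustrates just the scoring step; the "encoders" are hypothetical lookup tables with made-up vectors, whereas a real model learns them from large amounts of dialogue:

```python
# Toy dual-encoder: two separate "encoders" (here, lookup tables) map inputs
# and responses into a shared vector space; the score is their dot product.
INPUT_ENCODER = {
    "Hello": [1.0, 0.0],
    "How's it going?": [0.2, 1.0],
}
RESPONSE_ENCODER = {
    "Nice to meet you.": [0.9, 0.1],
    "What's up?": [0.3, 0.2],
    "Pretty well, thanks!": [0.0, 1.0],
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rank_responses(user_input):
    """Rank candidate responses by dot product with the encoded input."""
    query_vec = INPUT_ENCODER[user_input]
    return sorted(RESPONSE_ENCODER,
                  key=lambda r: dot(query_vec, RESPONSE_ENCODER[r]),
                  reverse=True)

print(rank_responses("Hello")[0])
# -> Nice to meet you.
```

Because the response encoder is trained on replies rather than paraphrases, “Hello” scores highest against “Nice to meet you.” instead of against a semantically similar greeting, matching the behavior noted above.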
Example Code

One of the models available on TensorFlow Hub is the Universal Sentence Encoder Lite. It’s only 1.6MB and is suitable for use within websites and on-device applications.
An open-source sample game that uses USE Lite is Mystery of the Three Bots on GitHub. It’s a simple demonstration that shows how you can use a small semantic ML model to drive conversations with game characters. The corpora the game uses were created and tested using the Semantic Reactor.
You can play a running version of the game here. You can experiment with the corpora of two of the characters, the Maid and the Butler, contained within this Sheet. Be sure to make a copy of the Sheet so you can edit and add new QA pairs.
Where To Get The Models Used Within The Semantic Reactor

All of the models used in the Semantic Reactor are published and available online.
- Local – Minified TensorFlow.js version of the Universal Sentence Encoder.
- Basic Online – Basic version of the Universal Sentence Encoder.
- Multilingual Online – Universal Sentence Encoder trained on question/answer pairs in 16 languages.
Final Thoughts

These language models are far from perfect. They use their training to give a best estimate of what to return based on the list of responses you give them. Machine learning is about calculation, prediction, and training. Models can be improved over time with more data and tuning, and in turn, be made more accurate.
Also, because conversational models are trained on dialogue between people, and because people are biased, the models will display biases that exist in the data that they were trained on, sometimes in ways you can’t predict. For more on model bias, and more detail about how these models were trained, see the Semantic Experiences for Developers page.
By Ben Pietrzak, Steve Pucci, Aaron Cohen — Google AI