Tag Archives: assistant

The best hardware, software and AI—together

Today, we introduced our second generation family of consumer hardware products that are coming to Canada, all made by Google: new Pixel phones, Google Home Mini and Max, an all-new Pixelbook, Google Pixel Buds, and an updated Daydream View headset. We see tremendous potential for devices to be helpful, make your life easier, and even get better over time when they’re created at the intersection of hardware, software and advanced artificial intelligence (AI). 

Why Google? 
These days many devices—especially smartphones—look and act the same. That means in order to create a meaningful experience for users, we need a different approach. A year ago, Sundar outlined his vision of how AI would change how people would use computers. And in fact, AI is already transforming what Google’s products can do in the real world. For example, swipe typing has been around for a while, but AI lets people use Gboard to swipe-type in two languages at once. Google Maps uses AI to figure out what the parking is like at your destination and suggest alternative spots before you’ve even put your foot on the gas. But, for this wave of computing to reach new breakthroughs, we have to build software and hardware that can bring more of the potential of AI into reality—which is what we’ve set out to do with this year’s new family of products.

Hardware, built from the inside out 
We’ve designed and built our latest hardware products around a few core tenets. First and foremost, we want them to be radically helpful. They’re fast, they’re there when you need them, and they’re simple to use. Second, everything is designed for you, so that the technology doesn’t get in the way and instead blends into your lifestyle. Lastly, by creating hardware with AI at the core, our products can improve over time. They’re constantly getting better and faster through automatic software updates. And they’re designed to learn from you, so you’ll notice features—like the Google Assistant—get smarter and more assistive the more you interact with them.

You’ll see this reflected in our 2017 lineup of new Made by Google products:

  • The Pixel 2 has the best camera of any smartphone, again, along with a gorgeous display and augmented reality capabilities. Pixel owners get unlimited storage for their photos and videos, and an exclusive preview of Google Lens, which uses AI to give you helpful information about the things around you. 
  • Google Home Mini brings the Assistant to more places throughout your home, with a beautiful design that fits anywhere. And Max, which is coming later to Canada, is our biggest and best-sounding Google Home device, powered by the Assistant. And with AI-based Smart Sound, Max has the ability to adapt your audio experience to you—your environment, context, and preferences. 
  • With Pixelbook, we’ve reimagined the laptop as a high-performance Chromebook, with a versatile form factor that works the way you do. It’s the first laptop with the Assistant built in, and the Pixelbook Pen makes the whole experience even smarter. 
  • Our new Pixel Buds combine Google smarts and the best digital sound. You’ll get elegant touch controls that put the Assistant just a tap away, and they’ll even help you communicate in a different language. 
  • The updated Daydream View is the best mobile virtual reality (VR) headset on the market, and the simplest, most comfortable VR experience. 

Assistant, everywhere 
Across all these devices, you can interact with the Google Assistant any way you want—talk to it with your Google Home or your Pixel Buds, squeeze your Pixel 2, or use your Pixelbook’s Assistant key or circle things on your screen with the Pixelbook Pen. Wherever you are, and on any device with the Assistant, you can connect to the information you need and get help with the tasks to get you through your day. No other assistive technology comes close, and it continues to get better every day.

Google’s hardware business is just getting started, and we’re committed to building and investing for the long run. We couldn’t be more excited to introduce you to our second-generation family of products, which truly brings together the best of Google software and thoughtfully designed hardware with cutting-edge AI. We hope you enjoy using them as much as we do.

Availability
Here’s where and when you can get our new hardware in Canada. Visit The Google Store for more details.

  • Pixel 2 and Pixel 2 XL are available for pre-order today, starting at $899, on The Google Store, Bell, Best Buy Canada, Fido, Freedom Mobile, Koodo, Rogers, The Source, TELUS, Tbooth wireless, Walmart, WIRELESSWAVE, Videotron, and Virgin. 
  • Pixel Buds will be available later this year for $219 on The Google Store and Best Buy Canada. 
  • Pixelbook is available in three configurations starting at $1299, so you can choose the processing power, memory and storage you want. The Pixelbook Pen is $129. Both will be available for pre-order today in Canada, with the exception of Quebec, and on sale at The Google Store and select retailers, including Best Buy Canada. We’re working to bring Pixelbook to Quebec in the future. 
  • Google Home Mini is available for pre-order today for $79 on The Google Store, Best Buy Canada and select retailers. 
  • The new Google Daydream View is available for pre-order today for $139 on The Google Store and select retailers. 

Posted by Rick Osterloh, SVP, Hardware

Actions on Google is now available in Australia

Posted by Brad Abrams, Product Manager

Last month we announced that UK users can access apps for the Google Assistant on Google Home and their phones—and starting today, we're bringing Actions on Google to Australia. From Perth to Sydney, developers can start building apps for the Google Assistant, giving their users even more ways to get things done.

Similar to our launch in the UK, your English apps will appear in the local directory automatically. That said, there are a few things you can do to make your app a true blue Aussie:

  • New TTS voices: There are a number of new TTS voices with an Australian English accent. We've automatically selected one for your app, but you can change the selected voice or opt to keep your current US or UK English voice by going to the Actions console.
  • Practice makes perfect: We also recommend reviewing your response text strings and making adjustments to accommodate differences between the dialects. Make sure you know the important things: candy should be lollies, and a servo is a gas station.
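A lightweight way to handle those regional word swaps is a per-locale substitution table applied to your response strings before they're spoken. The sketch below is purely illustrative—the locale codes are real BCP 47 tags, but the word list and helper are our own invention, not part of the Actions on Google API:

```python
# Hypothetical helper: adapt US-English response strings for Australian users.
# The word pairs below are illustrative examples, not an official list.
AU_SUBSTITUTIONS = {
    "candy": "lollies",
    "gas station": "servo",
}

def localize_response(text, locale):
    """Return the response text adjusted for the given locale."""
    if locale != "en-AU":
        return text  # pass other locales through unchanged
    for us_term, au_term in AU_SUBSTITUTIONS.items():
        text = text.replace(us_term, au_term)
    return text

print(localize_response("The nearest gas station sells candy.", "en-AU"))
# → The nearest servo sells lollies.
```

For a handful of terms a table like this is easy to review; for anything larger, maintaining separate per-locale string resources is the more scalable choice.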

Our developer tools, documentation and simulator have all been updated to make it easy for you to create, test and deploy your app. So what are you waiting for?

UK and Aussie users are just the start; we'll continue to make the Actions on Google platform available in more languages over the coming year. If you have questions about internationalization, please reach out to us on Stack Overflow and Google+.

Kaldi now offers TensorFlow integration

Posted by Raziel Alvarez, Staff Research Engineer at Google and Yishay Carmiel, Founder of IntelligentWire

Automatic speech recognition (ASR) has seen widespread adoption due to the recent proliferation of virtual personal assistants and advances in word recognition accuracy from the application of deep learning algorithms. Many speech recognition teams rely on Kaldi, a popular open-source speech recognition toolkit. We're announcing today that Kaldi now offers TensorFlow integration.

With this integration, speech recognition researchers and developers using Kaldi will be able to use TensorFlow to explore and deploy deep learning models in their Kaldi speech recognition pipelines. This will allow the Kaldi community to build even better and more powerful ASR systems, as well as provide TensorFlow users with a path to explore ASR while drawing upon the experience of the large community of Kaldi developers.

Building an ASR system that can understand human speech in every language, accent, environment, and type of conversation is an extremely complex undertaking. A traditional ASR system can be seen as a processing pipeline with many separate modules, where each module operates on the output from the previous one. Raw audio data enters the pipeline at one end and a transcription of recognized speech emerges from the other. In the case of Kaldi, these ASR transcriptions are post-processed in a variety of ways to support an increasing array of end-user applications.
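The pipeline structure described above—each module consuming the previous module's output—can be sketched as a chain of functions. The stage internals here are trivial placeholders (a real system like Kaldi uses MFCC features, trained acoustic models, and weighted finite-state decoders); only the composition pattern is the point:

```python
# Illustrative sketch of a traditional ASR pipeline as a chain of modules.
# Stage internals are placeholders, not real signal processing.

def extract_features(raw_audio):
    # Real systems compute e.g. MFCC or filterbank features per frame;
    # here we just average each frame as a stand-in.
    return [sum(frame) / len(frame) for frame in raw_audio]

def acoustic_model(features):
    # Maps features to phone-like symbols (placeholder thresholding).
    return ["ah" if f > 0.5 else "eh" for f in features]

def decoder(phones):
    # Combines acoustic output with a lexicon and language model to
    # produce words; here it simply joins the symbols.
    return " ".join(phones)

def pipeline(raw_audio):
    # Raw audio in at one end, a transcription out the other.
    return decoder(acoustic_model(extract_features(raw_audio)))

print(pipeline([[0.9, 0.8], [0.1, 0.2]]))
# → ah eh
```

Because each stage only depends on the previous stage's output, individual modules can be replaced—which is exactly what makes swapping a classic module for a deep learning model feasible.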

Yishay Carmiel and Hainan Xu of Seattle-based IntelligentWire, who led the development of the integration between Kaldi and TensorFlow with support from the two teams, know this complexity first-hand. Their company has developed cloud software to bridge the gap between live phone conversations and business applications. Their goal is to let businesses analyze and act on the contents of the thousands of conversations their representatives have with customers in real time, and automatically handle tasks like data entry or responding to requests. IntelligentWire is currently focused on the contact center market, in which more than 22 million agents throughout the world spend 50 billion hours a year on the phone and about 25 billion hours interfacing with and operating various business applications.

For an ASR system to be useful in this context, it must not only deliver an accurate transcription but do so with very low latency in a way that can be scaled to support many thousands of concurrent conversations efficiently. In situations like this, recent advances in deep learning can help push technical limits, and TensorFlow can be very useful.

In the last few years, deep neural networks have been used to replace many existing ASR modules, resulting in significant gains in word recognition accuracy. These deep learning models typically require processing vast amounts of data at scale, which TensorFlow simplifies. However, several major challenges must still be overcome when developing production-grade ASR systems:

  • Algorithms - Deep learning algorithms give the best results when tailored to the task at hand, including the acoustic environment (e.g. noise), the specific language spoken, the range of vocabulary, etc. These algorithms are not always easy to adapt once deployed.
  • Data - Building an ASR system for different languages and different acoustic environments requires large quantities of multiple types of data. Such data may not always be available or may not be suitable for the use case.
  • Scale - ASR systems that can support massive amounts of usage and many languages typically consume large amounts of computational power.

One of the ASR system modules that exemplifies these challenges is the language model. Language models are a key part of most state-of-the-art ASR systems; they provide linguistic context that helps predict the proper sequence of words and distinguish between words that sound similar. With recent machine learning breakthroughs, speech recognition developers are now using language models based on deep learning, known as neural language models. In particular, recurrent neural language models have shown superior results over classic statistical approaches.
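One way to see what a language model contributes is hypothesis rescoring: given several transcriptions that sound alike, the LM picks the one that is most plausible as language. In this sketch a toy bigram model with hand-made counts stands in for the recurrent neural language models discussed above—the rescoring idea is the same, only the model differs:

```python
import math

# Toy bigram counts standing in for a trained language model. A production
# system would use a neural (e.g. recurrent) LM over far more data.
BIGRAM_COUNTS = {
    ("recognize", "speech"): 9,
    ("wreck", "a"): 1,
    ("a", "nice"): 3,
    ("nice", "beach"): 1,
}

def log_prob(sentence, smoothing=0.5, vocab_size=100):
    """Smoothed log-probability of a word sequence under the bigram model."""
    words = sentence.split()
    total = 0.0
    for prev, cur in zip(words, words[1:]):
        count = BIGRAM_COUNTS.get((prev, cur), 0)
        total += math.log((count + smoothing) / (smoothing * vocab_size))
    return total

def rescore(hypotheses):
    """Pick the acoustically-similar hypothesis the LM finds most plausible."""
    return max(hypotheses, key=log_prob)

# Two hypotheses that sound nearly identical; the LM disambiguates them.
print(rescore(["recognize speech", "wreck a nice beach"]))
# → recognize speech
```

In a real decoder the language-model score is combined with the acoustic score rather than used alone, but the role is the same: linguistic context resolving words that sound alike.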

However, the training and deployment of neural language models is complicated and highly time-consuming. For IntelligentWire, the integration of TensorFlow into Kaldi has reduced the ASR development cycle by an order of magnitude. If a language model already exists in TensorFlow, then going from model to proof of concept can take days rather than weeks; for new models, the development time can be reduced from months to weeks. Deploying new TensorFlow models into production Kaldi pipelines is straightforward as well, providing big gains for anyone working directly with Kaldi as well as the promise of more intelligent ASR systems for everyone in the future.

Similarly, this integration provides TensorFlow developers with easy access to a robust ASR platform and the ability to incorporate existing speech processing pipelines, such as Kaldi's powerful acoustic model, into their machine learning applications. Kaldi modules that feed the training of a TensorFlow deep learning model can be swapped cleanly, facilitating exploration, and the same pipeline that is used in production can be reused to evaluate the quality of the model.

We hope this Kaldi-TensorFlow integration will bring these two vibrant open-source communities closer together and support a wide variety of new speech-based products and related research breakthroughs. To get started using Kaldi with TensorFlow, please check out the Kaldi repo and also take a look at an example for Kaldi setup running with TensorFlow.

Actions on Google is now available for British English

Posted by Brad Abrams, Product Manager

Starting today, we're making all your apps built for the Google Assistant available to our en-GB users across Google Home (recently launched in the UK), select Android phones and the iPhone.

While your apps will appear in the local directory automatically this week, to make your apps truly local, here are a couple of things you should do:

  • There are four new TTS voices with an en-GB accent. We've automatically selected one for your app, but you can change the selected voice or opt to keep your current en-US TTS voice by going to the Actions console.
  • We also recommend reviewing all your response text strings and making adjustments to accommodate differences between the two languages—e.g., those pesky little zeds. This will help make your app shine when accessed on the phone.

Apps like Akinator, Blinkist Minute and SongPop have already optimized their experience for en-GB Assistant users—and we can't wait to see who dives in next!

And for those of you who are excited about the ability to target Google Assistant users in en-GB, now is the perfect time to start building. Our developer tools, documentation and simulator have all been updated to make it easy for you to create, test and deploy your first app.

We'll continue to make the Actions on Google platform available in more languages over the coming year. If you have questions about internationalization, please reach out to us on Stack Overflow and Google+.

Cheerio!

Say bonjour to your Assistant in Google Allo

@Google, où est le marché Jean-Talon?

Last year, we launched Google Allo with smart features in English, Brazilian Portuguese and Hindi. Today, we’re adding support for these features in French, thanks to our latest update.

Now, for the first time ever, Canadians will be able to interact with their Google Assistant in French! 


The Google Assistant is ready to help en français

Google Allo is our smart messaging app for Android and iOS that helps you say more and do more right in your chats. You can get help from your Assistant without ever leaving the conversation. Sharing sports scores, recipes, or travel plans in French is now easy to do right in your chats with friends.

To start using the Assistant in Canadian French, just say “Talk to me in Canadian French” when you’re chatting with your Assistant in Google Allo. You can also adjust the language setting for your Assistant on your device. So whether you’re looking into weather forecasts for your trip to The Laurentians or for directions to the Olympic Stadium, add @google to your chat and your Assistant is ready to help.

Embracing French-Canadian culture
We’ve made the Assistant truly French Canadian, customizing the app with local elements that are unique to Quebec. From local celebrities and artists to landmarks and cultural institutions, you can ask the Assistant to answer questions that are specifically relevant to French Canada.

Respond rapidement with Smart Reply
We’ve found that Smart Reply in English has been helpful in sending quick responses while you’re chatting on the go. We’re now adding support for Smart Reply in French, so you can quickly send a “Oui” in response to a friend asking “Es-tu en chemin pour la partie de soccer?”.

Smart Reply will recognize the language you’re chatting in and begin to show suggested responses in that language. If you’re chatting in English, it will continue to show English responses. But if you start chatting in French, it will show you suggestions in that language.
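The behavior described above—detecting the conversation's language and switching the suggestion set accordingly—can be illustrated with a deliberately naive sketch. Nothing here resembles the production Smart Reply model (which is a learned system); the stopword lists and canned suggestions are our own stand-ins:

```python
# Illustrative sketch only: pick a suggestion language by counting
# common function words from each language in the incoming message.
FRENCH_HINTS = {"es", "tu", "le", "la", "pour", "de", "en"}
ENGLISH_HINTS = {"are", "you", "the", "for", "to", "on"}

SUGGESTIONS = {
    "fr": ["Oui", "Non", "En chemin!"],
    "en": ["Yes", "No", "On my way!"],
}

def detect_language(message):
    words = set(message.lower().rstrip("?!.").split())
    french_score = len(words & FRENCH_HINTS)
    english_score = len(words & ENGLISH_HINTS)
    return "fr" if french_score > english_score else "en"

def smart_replies(message):
    return SUGGESTIONS[detect_language(message)]

print(smart_replies("Es-tu en chemin pour la partie de soccer?"))
# → ['Oui', 'Non', 'En chemin!']
```

A real system would use statistical language identification and generate context-specific replies, but the flow is the same: classify the language first, then suggest in that language.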

Coming soon! Smart Reply will also suggest responses for photos. If your friend sends you a photo of their pet, you’ll see Smart Reply suggestions like “Trop mignon!” And whether you’re a “ah ah” or “?” kind of person, Smart Reply will improve over time and adjust to your style.

We can’t wait for you to say bonjour to Google Allo! We’re beginning to roll out these new features in French for Google Allo on Android and iOS, and they will be available to all users in Canada in the next few days.

In addition to French, we’ll continue to bring the Google Assistant and Smart Reply to more languages over time — stay tuned for more!

Running Android Things on the AIY Voice Kit

Posted by Ryan Bae, Android Things

A major benefit of using Android Things is the ability to prototype connected devices and quickly scale to full commercial products. To further that goal, the Android Things team is partnering with AIY Projects, a new initiative to bring do-it-yourself artificial intelligence to makers. Today, the AIY Projects team launched their first open source reference project: a Raspberry Pi-based Voice Kit with instructions to build a Voice User Interface (VUI) that can use cloud services (like the new Google Assistant SDK or Cloud Speech API) or run completely on-device with TensorFlow. We are releasing a special Android Things Developer Preview 3.1 build for Raspberry Pi 3 to support the Voice Kit. Developers can run Android Things on the Voice Kit with full functionality, including integration with the Google Assistant SDK. To get started, visit the AIY website, download the latest Android Things Developer Preview, and follow the instructions.

The Voice Kit ships out to all MagPi Magazine subscribers on May 4, 2017, and the parts list, assembly instructions, source code, as well as suggested extensions are available on AIY Projects website. The complete kit is also for sale at over 500 Barnes & Noble stores nationwide, as well as UK retailers WH Smith, Tesco, Sainsburys, and Asda.

We are excited to see what you build with the Voice Kit on Android Things. We also encourage you to join Google's IoT Developers Community and Google Assistant SDK Developers on Google+, a great resource to keep up to date and discuss ideas with other developers.

Introducing the Google Assistant SDK

Posted by Chris Ramsdale, Product Manager

When we first announced the Google Assistant, we talked about helping users get things done no matter what device they're using. We started with Google Allo, Google Home and Pixel phones, and expanded the Assistant ecosystem to include Android Wear and Android phones running Marshmallow and Nougat over the last few months. We also announced that Android Auto and Android TV will get support soon.

Today, we're taking another step towards building out that ecosystem by introducing the developer preview of the Google Assistant SDK. With this SDK you can now start building your own hardware prototypes that include the Google Assistant, like a self-built robot or a voice-enabled smart mirror. This allows you to interact with the Google Assistant from any platform.

The Google Assistant SDK includes a gRPC API, a Python open source client that handles authentication and access to the API, samples and documentation. The SDK allows you to capture a spoken query, for example "what's on my calendar", pass that up to the Google Assistant service and receive an audio response. And while it's ideal for prototyping on Raspberry Pi devices, it also adds support for many other platforms.
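The capture-query-receive-audio loop described above can be sketched in miniature. Every class and method name below is a stand-in of our own, not the actual SDK API (the real client speaks gRPC to the Assistant service and streams audio); the point is only the shape of the interaction:

```python
# Hypothetical sketch of the request/response flow: capture a spoken query,
# send it to an assistant service, receive an audio response back.
# FakeAssistantService is a local stub, NOT the Google Assistant SDK client.

class FakeAssistantService:
    """Stand-in for the remote service; answers one canned query."""
    def converse(self, audio_request: bytes) -> bytes:
        if audio_request == b"whats-on-my-calendar":
            return b"audio:you-have-two-events-today"
        return b"audio:sorry-i-cant-help-with-that"

def ask_assistant(service, spoken_query: bytes) -> bytes:
    # In a real device: 1) capture microphone audio, 2) stream the request
    # over gRPC with authentication, 3) play back the returned audio.
    # Here we just pass bytes through the stub and return the response.
    return service.converse(spoken_query)

response = ask_assistant(FakeAssistantService(), b"whats-on-my-calendar")
print(response)
# → b'audio:you-have-two-events-today'
```

On real hardware the interesting work sits at the edges of this loop—audio capture, streaming, and playback—which is what the SDK's Python client and samples handle for you.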

To get started, visit the Google Assistant SDK website for developers, download the SDK, and start building. In addition, Wayne Piekarski from our Developer Relations team has a video introducing the Google Assistant SDK, below.


And for some more inspiration, try our samples or check out an example implementation by Deeplocal, an innovation studio out of Pittsburgh that took the Google Assistant SDK for a spin and built a fun mocktails mixer. You can even build one for yourself: go here to learn more and read their documentation on GitHub. Or check out the video below on how they built their demo from scratch.


This is a developer preview, and we have a number of features in development including hotword support, companion app integration and more. If you're interested in building a commercial product with the Google Assistant, we encourage you to reach out and contact us. We've created a new developer community on Google+ at g.co/assistantsdkdev for developers to keep up to date and discuss ideas. There is also a Stack Overflow tag [google-assistant-sdk] for questions, and a mailing list to keep up to date on SDK news. We look forward to seeing what you create with the Google Assistant SDK!

Game developers rejoice—new tools for developing on Actions on Google

By Sunil Vemuri, Product Manager for Actions on Google

Since we launched the Actions on Google platform last year, we've seen a lot of creative actions for use cases ranging from meditation to insurance. But one of the areas we're especially excited about is gaming. Games from Akinator to SongPop demonstrate that developers can create new and engaging experiences for users. To bring more great games online, we're adding new tools to Actions on Google to make it easier than ever for you to build games for the Google Assistant.

First, we're releasing a brand new sound effect library. These effects can make your games more engaging, help you create a more fun persona for your action, and hopefully put smiles on your users' faces. From airplanes, slide whistles, and bowling to cats purring and thunder, you're going to find hundreds of options that will add some pizzazz to your Action.

Second, for those of you who feel nostalgic about interactive text adventures, we just published a handy guide on how to bring these games to life with the Google Assistant. With many old favorites being open source or in the public domain, you are now able to re-introduce these classics to Google Assistant users on Google Home.
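The core of a classic text adventure maps naturally onto a conversational action: each user utterance updates a game state and returns a line of prose. The sketch below uses our own example rooms and a single "go" command—it's a toy illustrating the loop, not the guide's recommended implementation:

```python
# Minimal text-adventure turn handler of the kind that could back a
# conversational action. Rooms and commands are illustrative examples.
ROOMS = {
    "hall": {"exits": {"north": "library"}, "desc": "You are in a dusty hall."},
    "library": {"exits": {"south": "hall"},
                "desc": "Shelves of old books surround you."},
}

def handle_utterance(room, utterance):
    """Map an utterance like 'go north' to a new room and a spoken reply."""
    words = utterance.lower().split()
    if words[:1] == ["go"] and len(words) > 1:
        direction = words[1]
        exits = ROOMS[room]["exits"]
        if direction in exits:
            new_room = exits[direction]
            return new_room, ROOMS[new_room]["desc"]
        return room, "You can't go that way."
    # Anything else: restate where the player is.
    return room, ROOMS[room]["desc"]

room, reply = handle_utterance("hall", "go north")
print(reply)
# → Shelves of old books surround you.
```

Because the handler is a pure function of (state, utterance), it slots cleanly into a request/response conversational platform, with the room name carried along as session state between turns.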

Finally, for those of you who are looking to build new types of games, we've recently expanded the list of tool and consulting companies that have integrated their development solutions with Actions on Google. New collaborators like Pullstring, Converse.AI, Solstice and XAPP Media are now also able to help turn your vision into reality.

We can't wait to see how you use our sound library and for the new and classic games you'll bring to Google Assistant users on Google Home! Make sure you join our Google+ community to discuss Actions on Google with other developers.