Monthly Archives: August 2018

Restricting ads in third-party tech support services

One of our top priorities is to maintain a healthy advertising ecosystem, and that means protecting people from misleading, inappropriate and harmful ads. We have teams of engineers, policy experts, product managers and others who wage a daily fight against bad actors. Over the years, this commitment has made the web a better place for our users—and a worse place for those who seek to abuse advertising systems for their own gain. Last year alone, we took down more than 3.2 billion ads that violated our advertising policies—that’s more than 100 bad ads per second.

When we see an increase in misleading or predatory behaviors in specific categories, we take additional action. For example, we’ve banned ads for payday loans and bail bonds services—and developed advanced verification programs to fight fraud in areas like local locksmith services and addiction treatment centers.

Today, we’re taking another step. We’ve seen a rise in misleading ad experiences stemming from third-party technical support providers and have decided to begin restricting ads in this category globally. For many years, we’ve consulted and worked with law enforcement and government agencies to address abuse in this area. As the fraudulent activity takes place off our platform, it’s increasingly difficult to separate the bad actors from the legitimate providers. That’s why in the coming months, we will roll out a verification program to ensure that only legitimate providers of third-party tech support can use our platform to reach consumers.

These efforts alone won’t stop all bad actors trying to game our advertising systems, but they will make it a lot harder. There’s more to do, and we’ll continue committing the resources necessary to keep the online advertising ecosystem a safe place for everyone.

Source: Google Ads


Understanding Performance Fluctuations in Quantum Processors



One area of research the Google AI Quantum team pursues is building quantum processors from superconducting electrical circuits, which are attractive candidates for implementing quantum bits (qubits). While superconducting circuits have demonstrated state-of-the-art performance and extensibility to modest processor sizes comprising tens of qubits, an outstanding challenge is stabilizing their performance, which can fluctuate unpredictably. Although performance fluctuations have been observed in numerous superconducting qubit architectures, their origin isn’t well understood, impeding progress in stabilizing processor performance.

In “Fluctuations of Energy-Relaxation Times in Superconducting Qubits,” published in this week’s Physical Review Letters, we use qubits as probes of their environment to show that performance fluctuations are dominated by material defects. This was done by investigating qubits’ energy relaxation times (T1) — a popular performance metric that gives the length of time it takes for a qubit to relax from its excited state to its ground state — as a function of operating frequency and time.
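As a quick aside on how T1 is typically extracted (this is the standard exponential-decay relation, not a detail specific to this experiment): the qubit is prepared in its excited state and the measured excited-state population after a variable delay t is fit to

P_excited(t) ≈ exp(−t / T1),

so the fitted decay constant T1 is the energy-relaxation time, and a smaller T1 at a given operating frequency means the qubit loses its stored energy faster there.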

In measuring T1, we found that some qubit operating frequencies are significantly worse than others, forming energy-relaxation hot spots (see figure below). Our research suggests that these hot spots are due to material defects, which are themselves quantum systems that can extract energy from qubits when their frequencies overlap (i.e. are “resonant”). Surprisingly, we found that the energy-relaxation hot spots are not static, but “move” on timescales ranging from minutes to hours. From these observations, we concluded that the dynamics of defects’ frequencies into and out of resonance with qubits drives the most significant performance fluctuations.
Left: A quantum processor similar to the one that was used to investigate qubit performance fluctuations. One qubit is highlighted in blue. Right: One qubit’s energy-relaxation time “T1” plotted as a function of its operating frequency and time. We see energy-relaxation hot spots, which our data suggest are due to material defects (black arrowheads). The motion of these hot spots into and out of resonance with the qubit is responsible for the most significant energy-relaxation fluctuations. Note that these data were taken over a frequency band with an above-average density of defects.
These defects — which are typically referred to as two-level systems (TLS) — are commonly believed to exist at the material interfaces of superconducting circuits. However, even after decades of research, their microscopic origin still puzzles researchers. In addition to clarifying the origin of qubit performance fluctuations, our data shed light on the physics governing defect dynamics, which is an important piece of this puzzle. Interestingly, from thermodynamic arguments we would not expect the defects that we see to exhibit any dynamics at all. Their energies are about one order of magnitude higher than the thermal energy available in our quantum processor, and so they should be “frozen out.” The fact that they are not frozen out suggests their dynamics may be driven by interactions with other defects that have much lower energies and can thus be thermally activated.
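To make the “frozen out” argument concrete, here is a back-of-the-envelope estimate using typical values rather than numbers from the paper: a defect resonant with a roughly 6 GHz qubit has energy E = hf, corresponding to E/k_B ≈ 0.29 K, while the processor sits in a dilution refrigerator at roughly 20 mK. That puts E at about 14 k_B·T, so the defect’s thermal excitation probability, roughly exp(−E / k_B·T) ≈ 5×10⁻⁷, is negligible, consistent with the expectation that such defects should be static.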

The fact that qubits can be used to investigate individual material defects, which are believed to have atomic dimensions millions of times smaller than our qubits, demonstrates that qubits are powerful metrological tools. While it’s clear that defect research could help address outstanding problems in materials physics, it’s perhaps surprising that it has direct implications for improving the performance of today’s quantum processors. In fact, defect metrology already informs our processor design and fabrication, and even the mathematical algorithms that we use to avoid defects during quantum processor runtime. We hope this research motivates further work into understanding material defects in superconducting circuits.

Source: Google AI Blog


Beta Channel Update for Chrome OS

The Beta channel has been updated to 69.0.3497.73 (Platform version: 10895.40.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. A list of changes can be found here.


If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Cindy Bayless
Google Chrome

Meet the first class of Launchpad Accelerator India

We opened applications to our Launchpad Accelerator India class in July 2018, and today we are excited to announce the start of the first batch. The inaugural class includes 10 startups with a presence across India, including Bengaluru, Thrissur, Thiruvananthapuram, Jaipur, Visakhapatnam, Pune and Mumbai.


Key selection criteria required startups based in India to be solving for India’s needs using advanced technologies such as AI/ML. The shortlisted startups, announced at the fourth edition of Google for India, are doing incredible work in areas ranging from enhancing the employability and earning ability of the blue-collar workforce to using satellite imagery to support farmers’ decision-making.
Launchpad Accelerator India is Google’s three-month program that begins with a two-week intensive mentorship bootcamp in Bengaluru, followed by customized virtual support for the remaining duration. Startups will get access to the best of Google -- including mentorship from Google’s leadership team, equity-free support and cloud credits. The first class kicks off on September 10, 2018, at the Google office in Bengaluru.


The startups in Class I are:
(1) CareNx: A smartphone-enabled fetal heart monitor for early detection of fetal asphyxia.
(2) Wysa: AI-based chat therapy for mental health.
(3) MultiBhashi: A simplified language-learning platform for the blue-collar workforce and the next billion Indian online users.
(4) Genrobotics: A semi-automatic robot for manhole cleaning, aimed at ending the practice of manual scavenging in India.
(5) OliveWear: A connected health ecosystem of doctors providing personalized maternal care to pregnant women.
(6) Signzy: An onboarding solution that establishes digital trust, using AI and blockchain for smart e-verification and risk prediction.
(7) SlangLabs: A software platform for building multilingual voice interfaces for mobile apps, enabling app interactions via voice in addition to touch.
(8) ten3T Healthcare: An early-detection system for preventable clinical adverse events, based on real-time, remote and continuous monitoring of patients’ vital signs.
(9) Uncanny Vision: Vehicle, people and object analytics using high-accuracy, AI-based deep learning models optimised to run on low-compute edge devices.
(10) Vassar Labs: Sensor, satellite and crowdsourced data used to provide decision-making support in sectors such as water, agriculture, education and more.

By Paul Ravindranath, Program Manager, Launchpad Accelerator India

Meet your bilingual Google Assistant, and get help with Routines


Your Google Assistant now understands and speaks more than one language at a time – and is becoming more useful with custom and scheduled routines. 


With the Google Assistant, it’s easy to get stuff done through a simple conversation – whether you’re looking for a delicious pumpkin soup recipe, setting a reminder to take the laundry off the line, or playing your favorite tunes. And starting today, the Assistant will become more helpful with two new capabilities: we’re adding multilingual support, so that the Assistant will be able to understand and speak more than one language at a time. Additionally, you can now set custom and scheduled routines on smartphones and speakers, making it easier to get things done quickly with your Assistant.


Talk to the Google Assistant in multiple languages

Family members in bilingual homes often switch back and forth between languages, and now the Assistant can keep up. With our advancement in speech recognition, you can now speak two languages interchangeably with the Assistant on smart speakers and phones and the Assistant will respond in kind. This is a first-of-its-kind feature only available on the Assistant and is part of our multi-year effort to make your conversations with the Assistant more natural. 

If you speak German at home, you can ask, “Hey Google, wie ist das Wetter heute?” (“Hey Google, what’s the weather like today?”) And if you’re at a house party with friends, you can switch to English and say “Hey Google, play my BBQ playlist.” Currently, the Assistant can understand any pair of these languages: English, German, French, Spanish, Italian, and Japanese. We’ll be expanding to more languages in the coming months.



Get help with your routines


Your Assistant will now be able to help you manage your daily routines and get multiple things done with a single command. We’ve put together six Routines that help with your morning, commutes to and from work, and evening at home. For example, say “Hey Google, I’m home” and the Assistant on your Google Home or phone can turn on the lights, share any home reminders, play your favorite music and more, all with just four words. 


We’re also rolling out Custom Routines in Australia, which allow you to create your own Routine with any of the Google Assistant’s one million Actions, and start your routine with a phrase that feels best for you. For example, you can create a Custom Routine for family dinner, and kick it off by saying "Hey Google, dinner's ready" and the Assistant can turn on your favorite music, turn off the TV, and broadcast “dinner time!” to everyone in the house. And on Google Home devices, you can now schedule Custom Routines for a specific day or time through the settings of your Google Assistant. So if you’re keen to get back into a regular exercise routine, you can set up your workout routine to automatically kick off several times per week.

We hope that these new features will make it even easier for you to get things done – even in multiple languages.


Posted by Manuel Bronstein, VP of Product, Google Assistant

Introducing the Tink cryptographic software library

Cross-posted on the Google Security Blog

At Google, many product teams use cryptographic techniques to protect user data. In cryptography, subtle mistakes can have serious consequences, and understanding how to implement cryptography correctly requires digesting decades' worth of academic literature. Needless to say, many developers don’t have time for that.

To help our developers ship secure cryptographic code we’ve developed Tink—a multi-language, cross-platform cryptographic library. We believe in open source and want Tink to become a community project—thus Tink has been available on GitHub since the early days of the project, and it has already attracted several external contributors. At Google, Tink is already being used to secure data in many products, such as AdMob, Google Pay, Google Assistant, Firebase and the Android Search App. After nearly two years of development, today we’re excited to announce Tink 1.2.0, the first version that supports cloud, Android, iOS, and more!

Tink aims to provide cryptographic APIs that are secure, easy to use correctly, and hard(er) to misuse. Tink is built on top of existing libraries such as BoringSSL and Java Cryptography Architecture, but includes countermeasures to many weaknesses in these libraries, which were discovered by Project Wycheproof, another project from our team.

With Tink, many common cryptographic operations such as data encryption, digital signatures, etc. can be done with only a few lines of code. Here is an example of encrypting and decrypting with our AEAD interface in Java:
import com.google.crypto.tink.Aead;
import com.google.crypto.tink.KeysetHandle;
import com.google.crypto.tink.aead.AeadFactory;
import com.google.crypto.tink.aead.AeadKeyTemplates;

// 1. Generate the key material.
KeysetHandle keysetHandle = KeysetHandle.generateNew(
    AeadKeyTemplates.AES256_EAX);

// 2. Get the primitive.
Aead aead = AeadFactory.getPrimitive(keysetHandle);

// 3. Use the primitive.
byte[] plaintext = ...;
byte[] additionalData = ...;
byte[] ciphertext = aead.encrypt(plaintext, additionalData);
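Decryption uses the same Aead primitive and keyset; continuing the snippet above:

// 4. Decrypt with the same keyset.
byte[] decrypted = aead.decrypt(ciphertext, additionalData);
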
Tink aims to eliminate as many potential misuses as possible. For example, if the underlying encryption mode requires nonces and nonce reuse makes it insecure, then Tink does not allow the user to pass nonces. Interfaces have security guarantees that must be satisfied by each primitive implementing the interface, which may exclude some encryption modes. Rather than adding such modes to existing interfaces and weakening their guarantees, new interfaces can be added whose security guarantees are described appropriately.

We’re cryptographers and security engineers working to improve Google’s product security, so we built Tink to make our job easier. Tink shows the claimed security properties (e.g., safe against chosen-ciphertext attacks) right in the interfaces, allowing security auditors and automated tools to quickly discover usages where the security guarantees don’t match the security requirements. Tink also isolates APIs for potentially dangerous operations (e.g., loading cleartext keys from disk), which allows discovering, restricting, monitoring and logging their usage.

Tink provides support for key management, including key rotation and phasing out deprecated ciphers. For example, if a cryptographic primitive is found to be broken, you can switch to a different primitive by rotating keys, without changing or recompiling code.

Tink is also extensible by design: it is easy to add a custom cryptographic scheme or an in-house key management system so that it works seamlessly with other parts of Tink. No part of Tink is hard to replace or remove. All components are composable, and can be selected and assembled in various combinations. For example, if you need only digital signatures, you can exclude symmetric key encryption components to minimize code size in your application.
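
As a small sketch of that signatures-only use case, written in the same 1.2.0-era factory style as the AEAD example above (exact class and template names may differ in later releases, and keyset/config registration boilerplate is again omitted):

import com.google.crypto.tink.KeysetHandle;
import com.google.crypto.tink.PublicKeySign;
import com.google.crypto.tink.PublicKeyVerify;
import com.google.crypto.tink.signature.PublicKeySignFactory;
import com.google.crypto.tink.signature.PublicKeyVerifyFactory;
import com.google.crypto.tink.signature.SignatureKeyTemplates;

// 1. Generate a private keyset and derive the corresponding public keyset.
KeysetHandle privateHandle = KeysetHandle.generateNew(SignatureKeyTemplates.ECDSA_P256);
KeysetHandle publicHandle = privateHandle.getPublicKeysetHandle();

// 2. Sign with the private keyset and verify with the public one.
PublicKeySign signer = PublicKeySignFactory.getPrimitive(privateHandle);
PublicKeyVerify verifier = PublicKeyVerifyFactory.getPrimitive(publicHandle);
byte[] data = ...;
byte[] signature = signer.sign(data);
verifier.verify(signature, data);  // throws GeneralSecurityException if invalid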

To get started, please check out our HOW-TO for Java, C++ and Obj-C. If you'd like to talk to the developers or get notified about project updates, you may want to subscribe to our mailing list. To join, simply send an empty email to tink-users+subscribe@googlegroups.com. You can also post your questions to StackOverflow, just remember to tag them with tink.

We’re excited to share this with the community, and welcome your feedback!

By Thai Duong, Information Security Engineer, on behalf of Tink team

Making embedded Google Forms better

We’re making Google Forms look and work better when they’re embedded in websites created with Google Sites. These improvements also mean Forms will work better when embedded in websites not built with Google Sites.

Our users embed forms in sites for all sorts of reasons, from collecting customer feedback to capturing new project ideas to gathering and sharing survey data and much more. User feedback told us how important it was that these embedded forms look and work great, especially when embedded in Google Sites.

So we’re making a range of improvements to make embedded forms more useful, including:


  • Improved looks, including a background that blends into the rest of the website 
  • Better suggested height and width when embedding a form on Google Sites 
  • More intelligent use of space in embedded forms 
  • More adaptive viewing on different devices (e.g. mobile vs. desktop) 
  • Easier viewing and entry of information in the embedded form



Launch Details 
Release track:
Launching to both Rapid Release and Scheduled Release

Editions: 
Available to all G Suite editions

Rollout pace: 
Full rollout (1–3 days for feature visibility)

Impact:
All end users

Action: 
Change management suggested/FYI

More Information 
Help Center: Send your form to people 


Teaching the Google Assistant to be Multilingual



Multilingual households are becoming increasingly common, with several sources [1][2][3] indicating that multilingual speakers already outnumber their monolingual counterparts, and that this number will continue to grow. With this large and increasing population of multilingual users, it is more important than ever that Google develop products that can support multiple languages simultaneously to better serve our users.

Today, we’re launching multilingual support for the Google Assistant, which enables users to jump between two different languages across queries, without having to go back to their language settings. Once users select two of the supported languages (English, Spanish, French, German, Italian and Japanese), they can speak to the Assistant in either language and the Assistant will respond in kind. Previously, users had to choose a single language setting for the Assistant, changing their settings each time they wanted to use another language; now, it’s a simple, hands-free experience for multilingual households.
The Google Assistant is now able to identify the language, interpret the query and provide a response using the right language without the user having to touch the Assistant settings.
Getting this to work, however, was not a simple feat. In fact, this was a multi-year effort that involved solving a lot of challenging problems. In the end, we broke the problem down into three discrete parts: Identifying Multiple Languages, Understanding Multiple Languages and Optimizing Multilingual Recognition for Google Assistant users.

Identifying Multiple Languages
People have the ability to recognize when someone is speaking another language, even if they do not speak the language themselves, just by paying attention to the acoustics of the speech (intonation, phonetic registry, etc.). However, defining a computational framework for automatic spoken language recognition is challenging, even with the help of full automatic speech recognition systems1. In 2013, Google started working on spoken language identification (LangID) technology using deep neural networks [4][5]. Today, our state-of-the-art LangID models can distinguish between pairs of languages in over 2000 alternative language pairs using recurrent neural networks, a family of neural networks that are particularly successful for sequence modeling problems, such as those in speech recognition, voice detection, speaker recognition and others. One of the challenges we ran into was working with larger sets of audio — getting models that can automatically understand multiple languages at scale, and hitting a quality standard that allowed those models to work properly.

Understanding Multiple Languages
To understand more than one language at once, multiple processes need to be run in parallel, each producing incremental results, allowing the Assistant not only to identify the language in which the query is spoken but also to parse the query to create an actionable command. For example, even in a monolingual environment, if a user asks to “set an alarm for 6pm”, the Google Assistant must understand that "set an alarm" implies opening the clock app, fulfilling the explicit parameter of “6pm”, and additionally making the inference that the alarm should be set for today. Making this work for any given pair of supported languages is a challenge, as the Assistant executes the same work it does for the monolingual case, but now must additionally enable LangID, and not just one but two monolingual speech recognition systems simultaneously (we’ll explain more about the current two language limitation later in this post).

Importantly, the Google Assistant and other services that are referenced in the user’s query asynchronously generate real-time incremental results that need to be evaluated in a matter of milliseconds. This is accomplished with the help of an additional algorithm that ranks the transcription hypotheses provided by each of the two speech recognition systems using the probabilities of the candidate languages produced by LangID, our confidence in the transcription and the user’s preferences (such as favorite artists, for example).
Schematic of our multilingual speech recognition system used by the Google Assistant versus the standard monolingual speech recognition system. A ranking algorithm is used to select the best recognition hypotheses from the two monolingual speech recognizers using relevant information about the user and the incremental LangID results.
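As a loose illustration of that kind of ranking, here is a simplified sketch in Java with invented names and a hand-tuned linear score standing in for the production ranker (which is not public):

// Hypothetical sketch: ranking incremental hypotheses from two monolingual
// recognizers using LangID probabilities, recognizer confidence and a user
// preference signal. Names and weights are illustrative, not Google's ranker.
public class BilingualRankerSketch {
  static class Hypothesis {
    final String language;             // e.g. "de-DE" or "en-US"
    final String transcript;
    final double recognizerConfidence; // confidence reported by that recognizer
    final double langIdProb;           // LangID probability for this language
    final double userPreference;       // e.g. boost for a favorite-artist match

    Hypothesis(String language, String transcript, double recognizerConfidence,
               double langIdProb, double userPreference) {
      this.language = language;
      this.transcript = transcript;
      this.recognizerConfidence = recognizerConfidence;
      this.langIdProb = langIdProb;
      this.userPreference = userPreference;
    }

    // Hand-tuned linear combination standing in for the real ranking model.
    double score() {
      return 0.5 * langIdProb + 0.4 * recognizerConfidence + 0.1 * userPreference;
    }
  }

  static Hypothesis pickBest(Hypothesis a, Hypothesis b) {
    return a.score() >= b.score() ? a : b;
  }

  public static void main(String[] args) {
    Hypothesis de = new Hypothesis("de-DE", "wie ist das Wetter heute", 0.82, 0.90, 0.0);
    Hypothesis en = new Hypothesis("en-US", "we is that better hoytah", 0.35, 0.10, 0.0);
    System.out.println(pickBest(de, en).transcript); // prints the German hypothesis
  }
}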
When the user stops speaking, the model has not only determined what language was being spoken, but also what was said. Of course, this process requires a sophisticated architecture that comes with an increased processing cost and the possibility of introducing unnecessary latency.

Optimizing Multilingual Recognition
To minimize these undesirable effects, the faster the system can make a decision about which language is being spoken, the better. If the system becomes certain of the language being spoken before the user finishes a query, then it will stop running the user’s speech through the losing recognizer and discard the losing hypothesis, thus lowering the processing cost and reducing any potential latency. With this in mind, we saw several ways of optimizing the system.

One pattern we relied on is that people normally use the same language throughout a query (which is also the language users generally want to hear back from the Assistant), with the exception of asking about entities with names in different languages. This means that, in most cases, focusing on the first part of the query allows the Assistant to make a preliminary guess of the language being spoken, even in sentences containing entities in a different language. With this early identification, the task is simplified by switching to a single monolingual speech recognizer, as we do for monolingual queries. Making a quick decision about how and when to commit to a single language, however, requires a final technological twist: specifically, we use a random forest technique that combines multiple contextual signals, such as the type of device being used, the number of speech hypotheses found, how often we receive similar hypotheses, the uncertainty of the individual speech recognizers, and how frequently each language is used.
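
A rough sketch of that early-commit decision, again with invented signal names and a simple threshold rule standing in for the random forest described above:

// Hypothetical sketch: decide whether to commit to one language mid-query and
// shut down the other recognizer. A threshold rule stands in for the random
// forest of contextual signals described in the post.
public class EarlyCommitSketch {
  static class Signals {
    double langIdProbTop;                // LangID probability of the leading language
    double leadingRecognizerConfidence;
    double trailingRecognizerConfidence;
    int agreeingHypotheses;              // how often recent hypotheses agreed on the language
    double languageUsageFrequency;       // how often the user speaks the leading language
  }

  // Returns true if we are confident enough to drop the losing recognizer early.
  static boolean commitToLeadingLanguage(Signals s) {
    boolean langIdCertain = s.langIdProbTop > 0.95;
    boolean recognizersAgree = s.agreeingHypotheses >= 3
        && s.leadingRecognizerConfidence - s.trailingRecognizerConfidence > 0.3;
    boolean priorSupports = s.languageUsageFrequency > 0.5;
    return langIdCertain || (recognizersAgree && priorSupports);
  }

  public static void main(String[] args) {
    Signals s = new Signals();
    s.langIdProbTop = 0.97;
    System.out.println(commitToLeadingLanguage(s)); // prints true
  }
}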

An additional way we simplified and improved the quality of the system was to limit the list of candidate languages users can select. Users can choose two languages out of the six that our Home devices currently support, which will allow us to support the majority of our multilingual speakers. As we continue to improve our technology, however, we hope to tackle trilingual support next, knowing that this will further enhance the experience of our growing user base.

Bilingual to Trilingual
From the beginning, our goal has been to make the Assistant naturally conversational for all users. Multilingual support has been a highly requested feature, and it’s something our team set its sights on years ago. But bilingual speakers aren’t the only ones we want to help: we also want to make life a little easier for trilingual users, or families that live in homes where more than two languages are spoken.

With today’s update, we’re on the right track, and it was made possible by our advanced machine learning, our speech and language recognition technologies, and our team’s commitment to refine our LangID model. We’re now working to teach the Google Assistant how to process more than two languages simultaneously, and are working to add more supported languages in the future — stay tuned!


1 It is typically acknowledged that spoken language recognition is remarkably more challenging than text-based language identification, where relatively simple techniques based on dictionaries can do a good job. The time/frequency patterns of spoken words are difficult to compare; spoken words can be more difficult to delimit, as they can be spoken without pause and at different paces; and microphones may record background noise in addition to speech.

Source: Google AI Blog


Step aboard Discovery with virtual reality

Editor’s note: On the anniversary of the first launch of the Space Shuttle Discovery, we’ll hear from Dr. Ellen R. Stofan, planetary geologist and the John and Adrienne Mars Director of the Smithsonian National Air and Space Museum, about a new 360 film on board the Shuttle that launched the Hubble Space Telescope.

Since the dawn of spaceflight, only a few hundred people have experienced space firsthand. But since the beginning, there have been moments that captured the world’s imagination and challenged our collective Earth-bound perspective. Of the many orbital endeavors that have made headlines through the decades, one of the most enduring and prolific has been the Hubble Space Telescope.

The Hubble has been called one of the most important single scientific instruments of all time. The data it collected has deepened our understanding of the natural world—from the edge of our solar system to the age of the universe—and the images it has returned have brought the startling beauty of the cosmos to people around the world.

Today, on the 34th anniversary of the Space Shuttle Discovery’s maiden voyage, the Smithsonian’s National Air and Space Museum and Google Arts & Culture have teamed up to bring visitors into the orbiter like never before. Two of the astronauts who helped deliver Hubble to orbit as part of STS-31—Maj. Gen. Charlie Bolden and Dr. Kathy Sullivan—take us on a 360 journey inside Discovery at the Museum’s Steven F. Udvar-Hazy Center.

Inside Space Shuttle Discovery 360 | National Air and Space Museum

The video was captured using Google’s Halo camera, and takes us along with the astronauts as they climb aboard the spacecraft together for the first time in 28 years. Charlie and Kathy show us what life in space was like from dawn (they saw 16 sunrises and sunsets each day) to dinnertime (sometimes eaten on the ceiling), and relive the moment they deployed Hubble after years of planning and training.

STS-31 is just one great example of why Discovery was called the champion of the Shuttle fleet—and why it is now on display as part of the Smithsonian’s national collection. Discovery flew every kind of mission the Space Shuttle was designed to fly, from Hubble’s deployment to the delivery and assembly of International Space Station modules and more. Today, we’re celebrating the orbiter’s 39 missions and 365 total days in space with this special immersive film, 15 digital exhibits, virtual tours, and over 200 online artifacts.

As we enter a new era of spaceflight in the years ahead—with NASA’s Commercial Crew Program and the development of Hubble’s successor, the James Webb Space Telescope—I hope this new collection demonstrates the remarkable progress we’ve made toward unlocking the mysteries of the universe, and how much farther we can go together. Explore the magic of the Space Shuttle Discovery on Google Arts & Culture.

Meet the bilingual Google Assistant with new smart home devices

This summer, we’ve brought the Google Assistant to more devices across Europe and the rest of the world to help you get answers and get things done in more languages (most recently supporting Spanish, Swedish and Dutch).

At IFA 2018, we’re adding multilingual support, so that the Assistant will be able to understand and speak more than one language at a time. Additionally, we’ll be introducing new phones and a broad range of devices and appliances for the home that support the Assistant from our growing ecosystem of partners in Europe.

Talk to the Google Assistant in multiple languages

Family members in bilingual homes often switch back and forth between languages, and now the Assistant can keep up. With our advancement in speech recognition, you can now speak two languages interchangeably with the Assistant on smart speakers and phones and the Assistant will respond in kind. This is a first-of-its-kind feature only available on the Assistant and is part of our multi-year effort to make your conversations with the Assistant more natural.

If you’re looking for an answer in English, ask, “Hey Google, what’s the weather like today?” If you’re craving tunes from your favorite German hip hop band, just ask “Hey Google, spiele die Fantastischen Vier” (“Hey Google, play Die Fantastischen Vier”). Currently, the Assistant can understand any pair of these languages: English, German, French, Spanish, Italian, and Japanese. We’ll be expanding to more languages in the coming months.

Your bilingual Google Assistant

A fully connected home

Enjoying home entertainment
Listening to music is one of the most popular ways people use the Assistant. That’s why we built Google Home Max to offer high-fidelity, balanced sound, and now it’s coming to Germany, the UK and France—Google Home Max will hit store shelves starting today.

This week, we’re also announcing that the Assistant will be built into new voice-activated speakers, including Bang & Olufsen’s Beosound 1 and Beosound 2, Blaupunkt’s PVA 100, Harman Kardon’s HK Citation series, Kygo’s Speaker B9-800, Polaroid’s Sam and Buddy, and Marshall’s Acton II and Stanmore II. Expect these smart speakers and soundbars to roll out later this year in local European markets.

Getting things done in the kitchen
On the heels of introducing our first ever Smart Displays last month with Lenovo, we’re expanding our offerings with the upcoming launch of JBL’s Link View and LG XBOOM AI ThinQ WK9 in the coming weeks. With these new Smart Displays, you’ll have the perfect kitchen companion. You can use your voice and tap or swipe the screen to follow along with a recipe, control your smart home, watch live TV on YouTube TV, and make video calls with Google Duo. Smart Displays also come integrated with all your favorite Google products and services like Google Calendar, Google Maps, Google Photos and YouTube.

Controlling all connected devices in your home
The Assistant is also making your home even smarter. In just the past year, the number of home devices and appliances that work with the Assistant in Europe has tripled, spanning all the major local brands you’re familiar with.

Our partners will be releasing more devices that work with the Assistant throughout the home in the coming months, including:

  • Thermostats: tado° Smart Thermostat and Smart Radiator Thermostat, Homematic IP Radiator Thermostat
  • Security and Smart Home Hubs: Netatmo’s Smart Indoor and Outdoor Security Cameras, TP-Link’s Kasa Cam KC120 and Kasa Cam Outdoor KC200, Smanos K1 SmartHome DIY Security Kit, and Somfy’s TaHoma smart home hub
  • Lighting: FIBARO Switch, MEDION RGB LED bulb and stripe, and the Nanoleaf Light Panels
  • Appliances: Electrolux’s smart ovens, iRobot® Roomba® 980, 896 and 676 vacuums

Whether you speak German, French, English, Italian or Spanish, you’ll be able to set the temperature, lock the doors, dim the lights and more from a smart speaker or smartphone.

Smart Home

On the go with your phone and headphones

The Google Assistant is expanding to more Android phones and headphones, helping you when you're on the go. Some of the latest flagship devices, including the LG G7 One, SHARP Simple Smartphone 4 and Vivo NEX S, now feature dedicated buttons to easily access the Assistant. In addition, the new Xperia XZ3 from Sony and BlackBerry Key2 LE also take advantage of the shortcuts to trigger the Assistant.

And this week we're announcing that over the coming year, more headphones are on the way, including the JBL Everest GA, LG Tone Platinum and Earin M-2. When you pair them with your phone, you can talk to the Assistant instantly with just a touch, whether you want to skip a track to hear the next song, get notifications and respond to your messages, or set reminders.

Phew, that was a lot of news. With lots of new devices and partners coming to Europe, the Google Assistant will be available to help you through every step of your day.