Finding a place to charge your EV is easy with Google Maps

If you’ve ever driven to an electric vehicle (EV) charging station only to find that all ports are occupied, you know that you could end up waiting in line for anywhere from minutes to hours—which can really put a damper on your day when you have places to go and things to do.


Starting today, you can see the real-time availability of charging ports in the U.S. and U.K., right from Google Maps, so you can know whether chargers are available before you head to a station. Simply search for “ev charging stations” to see up-to-date information from networks like Chargemaster, EVgo, SemaConnect and soon, Chargepoint. You’ll then see how many ports are currently available, along with other helpful details, like the business where the station is located, port types and charging speeds. You’ll also see information about the station from other drivers, including photos, ratings, reviews and questions.

You can search for real-time EV charging information on Google Maps on desktop, Android, iOS and on Google Maps for Android Auto. To get started, update your Google Maps app from the App Store or Play Store.


Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 74 (74.0.3729.108) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Improving our mobile layout

We are pleased to announce a new mobile layout that should provide an improved experience for mobile device users. We’ve changed the look of the search box and refinements, increased the size of the thumbnails, and simplified the pagination.
[Before and after screenshots of the mobile layout]
Most of these changes only affect mobile devices, but the refinements have also been updated for desktop.
[Before and after screenshots of the refinements on desktop]
The mobile-specific changes can optionally be disabled by setting the "mobileLayout" attribute of the search element to "disabled".

Beta Channel Update for Desktop

The beta channel has been updated to 74.0.3729.108 for Windows, Mac, and Linux.

A full list of changes in this build is available in the log. Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Abdul Syed
Google Chrome

Announcing Google Ads API Doctor

We have heard from users that correctly configuring a client library and provisioning OAuth2 credentials can be challenging, so today we are introducing Google Ads API Doctor, a new tool that will analyze your client library environment. The program will:
  • Verify that your OAuth2 credentials are correctly configured and ready to make API calls.
  • Guide you through fixing any OAuth2 problems it detects and verify the corrected configuration.
The initial version of this tool will help you analyze and fix issues related to OAuth2 configuration, including the following common issues:
  • Invalid refresh token: The program will identify this and guide you through the process to obtain a valid token, back up your configuration file, and write the new value to your active configuration file.
  • Permission denied: There are several OAuth errors that sound similar, such as user permission denied and permission denied. The program distinguishes between them: the first is caused by an invalid refresh token, while the second occurs because the Google Ads API is disabled in the Google API Console.
If you want to send the output to support, you can run your scenario with the PII flag to hide your Personally Identifiable Information (PII) and copy the screen output. To gather even more information, you can use the verbose flag to see the low-level JSON that is returned.
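To make the OAuth2 check concrete, here is a minimal sketch, assuming the standard google-auth Python library, of what verifying a refresh token amounts to. This is not code from Google Ads API Doctor itself, and the client ID, client secret, and refresh token values are placeholders you would take from your client library configuration file.

```python
# Minimal sketch (not part of Google Ads API Doctor): check whether an OAuth2
# refresh token can still mint access tokens, using the google-auth library.
# client_id, client_secret, and refresh_token are placeholder values.
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
import google.auth.exceptions


def refresh_token_is_valid(client_id, client_secret, refresh_token):
    creds = Credentials(
        token=None,  # no access token yet; force a refresh
        refresh_token=refresh_token,
        token_uri="https://oauth2.googleapis.com/token",
        client_id=client_id,
        client_secret=client_secret,
    )
    try:
        creds.refresh(Request())  # exchange the refresh token for an access token
        return True
    except google.auth.exceptions.RefreshError:
        # Typically means the refresh token was revoked, expired, or malformed.
        return False
```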

We are releasing this project as open source per Google’s open source initiative, and we encourage contributions. See contributing to Google open source to learn more about how to contribute to this project. As always, share your feedback on the Google Ads API forum.

Steps toward a more sustainable future

People perform trillions of searches on Google each year, upload hundreds of hours of videos to YouTube each minute, and receive more than 120 billion emails every week. Making all of these Google services work for everyone requires a lot of behind-the-scenes work, like operating a global network of data centers around the clock and manufacturing products for people around the world.

It’s not only our responsibility to build products and services that are fast and reliable for everyone, but also to make sure we do so with minimal impact on our planet. So this Earth Day, we’re taking inventory of the progress we've made when it comes to sustainability and where we plan to do more.

We’ve scaled up our use of renewable energy.  

  • In 2017, we hit a goal that we set five years earlier and matched 100 percent of the electricity consumption of our operations with purchases of renewable energy. This means that for each unit of energy we used that year, we purchased an equivalent unit of energy from a renewable source, such as wind or solar.

  • When we buy renewable energy, we only do so from projects that are constructed for Google. This helps us bring new clean energy supply onto the grids where we operate our facilities.

  • Today, a Google data center uses 50 percent less energy than a typical data center, while delivering seven times more computing power than we did five years ago.

  • We use AI to help safely run our data center cooling systems—already this has resulted in 30 percent energy savings.

  • We’re weaving circularity into our operations.  In our data centers, we use components from old servers to upgrade machines and build remanufactured machines with refurbished parts.

We build products and services that help others become a part of the solution.  

  • To date, Nest Thermostats have helped people save a total of more than 35 billion kilowatt hours of energy—that’s enough energy to power the city of San Francisco for three years.

  • Researchers and policy makers use our Google Geo platforms to better take care of our planet. Products like Google Earth Engine help people combat overfishing, monitor forest change and protect the freshwater supply.

  • Businesses that switch from locally hosted solutions to G Suite have reported reductions in IT energy use and carbon emissions of up to 85 percent.

  • Organizations that move IT infrastructure and collaboration applications, like Gmail and Google Docs, from a self-managed data center or colocation facility to Google Cloud reduce the net carbon emissions of their computing to zero.

Our sustainability work isn’t over. When we think long term, we’re working toward directly sourcing carbon-free energy for our operations—24 hours a day, 7 days a week—in all the places we operate. Already, we’re working with governments and utility companies to chart a course toward making a 24x7 carbon-free grid a reality so more companies and people can decrease their carbon footprint. We know that it is the right path forward, and we have just begun.

Along the way we’ll continue to find more ways to protect our planet with our sustainability efforts. Follow along with us in this collection that we’ll be updating all week long in celebration of Earth Day.

SpecAugment: A New Data Augmentation Method for Automatic Speech Recognition



Automatic Speech Recognition (ASR), the process of taking an audio input and transcribing it to text, has benefited greatly from the ongoing development of deep neural networks. As a result, ASR has become ubiquitous in many modern devices and products, such as Google Assistant, Google Home and YouTube. Nevertheless, there remain many important challenges in developing deep learning-based ASR systems. One such challenge is that ASR models, which have many parameters, tend to overfit the training data and have a hard time generalizing to unseen data when the training set is not extensive enough.

In the absence of an adequate volume of training data, it is possible to increase the effective size of existing data through the process of data augmentation, which has contributed to significantly improving the performance of deep networks in the domain of image classification. In the case of speech recognition, augmentation traditionally involves deforming the audio waveform used for training in some fashion (e.g., by speeding it up or slowing it down), or adding background noise. This has the effect of making the dataset effectively larger, as multiple augmented versions of a single input are fed into the network over the course of training, and also helps the network become robust by forcing it to learn relevant features. However, existing conventional methods of augmenting audio input introduce additional computational cost and sometimes require additional data.
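For illustration, here is a minimal sketch, not taken from the paper, of the conventional waveform-level augmentation described above, using NumPy and librosa; the file path, perturbation range and signal-to-noise ratio are illustrative assumptions.

```python
# Minimal sketch of conventional waveform-level augmentation: tempo
# perturbation plus additive background noise. Parameters are illustrative.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)  # placeholder path

# Tempo perturbation: stretch the signal so it plays slightly faster or slower.
rate = np.random.uniform(0.9, 1.1)
y_speed = librosa.effects.time_stretch(y, rate=rate)

# Additive noise at a target signal-to-noise ratio (here ~20 dB).
snr_db = 20.0
noise = np.random.randn(len(y))
signal_power = np.mean(y ** 2)
noise_power = signal_power / (10 ** (snr_db / 10))
y_noisy = y + np.sqrt(noise_power / np.mean(noise ** 2)) * noise
```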

In our recent paper, “SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition”, we take a new approach to augmenting audio data, treating it as a visual problem rather than an audio one. Instead of augmenting the input audio waveform as is traditionally done, SpecAugment applies an augmentation policy directly to the audio spectrogram (i.e., an image representation of the waveform). This method is simple, computationally cheap to apply, and does not require additional data. It is also surprisingly effective in improving the performance of ASR networks, demonstrating state-of-the-art performance on the ASR tasks LibriSpeech 960h and Switchboard 300h.

SpecAugment
In traditional ASR, the audio waveform is typically encoded as a visual representation, such as a spectrogram, before being input as training data for the network. Augmentation of training data is normally applied to the waveform audio before it is converted into the spectrogram, such that after every iteration, new spectrograms must be generated. In our approach, we instead augment the spectrogram itself rather than the waveform data. Since the augmentation is applied directly to the input features of the network, it can be run online during training without significantly impacting training speed.
A waveform is typically converted into a visual representation (in our case, a log mel spectrogram; steps 1 through 3 of this article) before being fed into a network.
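As a rough illustration of that conversion step, the following sketch computes a log mel spectrogram with librosa; the window, hop and mel-channel settings are illustrative assumptions, not necessarily those used in our experiments.

```python
# Minimal sketch: convert a waveform into a log mel spectrogram with librosa.
# FFT size, hop length, and number of mel channels are illustrative.
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)  # placeholder path
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=400, hop_length=160, n_mels=80  # 25 ms window, 10 ms hop
)
log_mel = librosa.power_to_db(mel)  # shape: (n_mels, time_steps)
```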
SpecAugment modifies the spectrogram by warping it in the time direction, masking blocks of consecutive frequency channels, and masking blocks of utterances in time. These augmentations have been chosen to help the network be robust against deformations in the time direction, partial loss of frequency information, and partial loss of small segments of speech in the input. An example of such an augmentation policy is displayed below.
The log mel spectrogram is augmented by warping in the time direction, and masking (multiple) blocks of consecutive time steps (vertical masks) and mel frequency channels (horizontal masks). The masked portion of the spectrogram is displayed in purple for emphasis.
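The masking steps can be sketched in a few lines of NumPy. The snippet below is an illustrative re-implementation, not the code used in the paper: the mask widths and counts are arbitrary placeholders, and the time-warping step is omitted for brevity.

```python
# Minimal sketch of SpecAugment-style frequency and time masking on a log mel
# spectrogram of shape (n_mels, time_steps). Mask widths and counts are
# illustrative; the time-warping step from the paper is omitted.
import numpy as np


def mask_spectrogram(log_mel, n_freq_masks=2, max_freq_width=10,
                     n_time_masks=2, max_time_width=50, mask_value=0.0):
    spec = log_mel.copy()
    n_mels, n_steps = spec.shape

    for _ in range(n_freq_masks):
        width = np.random.randint(0, max_freq_width + 1)
        start = np.random.randint(0, max(1, n_mels - width))
        spec[start:start + width, :] = mask_value  # mask consecutive mel channels

    for _ in range(n_time_masks):
        width = np.random.randint(0, max_time_width + 1)
        start = np.random.randint(0, max(1, n_steps - width))
        spec[:, start:start + width] = mask_value  # mask consecutive time steps

    return spec
```

Because the masks are sampled fresh on every call, the same utterance yields a different augmented spectrogram at every training step, which is what allows the augmentation to run online during training.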
To test SpecAugment, we performed some experiments with the LibriSpeech dataset, where we took three Listen Attend and Spell (LAS) networks, end-to-end networks commonly used for speech recognition, and compared the test performance between networks trained with and without augmentation. The performance of an ASR network is measured by the Word Error Rate (WER) of the transcript produced by the network against the target transcript. Here, all hyperparameters were kept the same, and only the data fed into the network was altered. We found that SpecAugment improves network performance without any additional adjustments to the network or training parameters.
Performance of networks on the test sets of LibriSpeech with and without augmentation. The LibriSpeech test set is divided into two portions, test-clean and test-other, the latter of which contains noisier audio data.
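Since WER is the metric reported throughout, a generic sketch of how it is computed may be useful: the word-level edit distance (substitutions, insertions and deletions) between the hypothesis and the reference, divided by the number of reference words. This is a textbook implementation, not the evaluation code used in our experiments.

```python
# Minimal sketch of Word Error Rate: word-level Levenshtein distance divided
# by the number of words in the reference transcript.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / max(1, len(ref))


# e.g. word_error_rate("the cat sat", "the cat sat down") == 1/3
```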
More importantly, SpecAugment prevents the network from over-fitting by giving it deliberately corrupted data. As an example of this, below we show how the WER for the training set and the development (or dev) set evolves through training with and without augmentation. We see that without augmentation, the network achieves near-perfect performance on the training set, while grossly under-performing on both the clean and noisy dev set. On the other hand, with augmentation, the network struggles to perform as well on the training set, but actually shows better performance on the clean dev set, and shows comparable performance on the noisy dev set. This suggests that the network is no longer over-fitting the training data, and that improving training performance would lead to better test performance.
Training, clean (dev-clean) and noisy (dev-other) development set performance with and without augmentation.
State-of-the-Art Results
We can now focus on improving training performance, which can be done by adding more capacity to the networks by making them larger. By doing this in conjunction with increasing training time, we were able to get state-of-the-art (SOTA) results on the tasks LibriSpeech 960h and Switchboard 300h.
Word error rates (%) for state-of-the-art results on the LibriSpeech 960h and Switchboard 300h tasks. The test sets for both tasks have a clean (clean/Switchboard) and a noisy (other/CallHome) subset. Previous SOTA results taken from Li et al. (2019), Yang et al. (2018) and Zeyer et al. (2018).
The simple augmentation scheme we have used is surprisingly powerful—we are able to improve the performance of the end-to-end LAS networks so much that they surpass classical ASR models, which traditionally did much better on smaller academic datasets such as LibriSpeech or Switchboard.
Performance of various classes of networks on LibriSpeech and Switchboard tasks. The performance of LAS models is compared to classical (e.g., HMM) and other end-to-end models (e.g., CTC/ASG) over time.
Language Models
Language models (LMs), which are trained on a bigger corpus of text-only data, have played a significant role in improving the performance of an ASR network by leveraging information learned from text. However, LMs typically need to be trained separately from the ASR network and can be very large in memory, making them hard to fit on a small device, such as a phone. An unexpected outcome of our research was that models trained with SpecAugment out-performed all prior methods even without the aid of a language model. While our networks still benefit from adding an LM, our results are encouraging in that they suggest the possibility of training networks that can be used for practical purposes without the aid of an LM.
Word error rates for LibriSpeech and Switchboard tasks with and without LMs. SpecAugment outperforms previous state-of-the-art even before the inclusion of a language model.
Most of the work on ASR in the past has been focused on looking for better networks to train. Our work demonstrates that looking for better ways to train networks is a promising alternative direction of research.

Acknowledgements
We would like to thank the co-authors of our paper Chung-Cheng Chiu, Ekin Dogus Cubuk, Quoc Le, Yu Zhang and Barret Zoph. We also thank Yuan Cao, Ciprian Chelba, Kazuki Irie, Ye Jia, Anjuli Kannan, Patrick Nguyen, Vijay Peddinti, Rohit Prabhavalkar, Yonghui Wu and Shuyuan Zhang for useful discussions.

Source: Google AI Blog


Being a mom is hard work. Becoming one is, too.

For seven years, Mother’s Day was the worst day of the year for me. It was an observance that felt completely out of reach, yet commercially and socially it was a reminder that I couldn't escape. I wanted to be a mom, but I was having trouble becoming one. For my husband and me, the inner walls of our bedroom became clinical, timed and invaded by fertility specialists. The outside world didn’t understand what we were going through—they saw us as a couple who decided to "take their time" to start a family. I began doing my own research and found out that 1 in 8 women in America are struggling, too. There are over 7 million of us who want a child but have a disease or other barrier that stands in our way.

Using Google and YouTube, I found support groups, blogs and resources. I wasn’t as alone as I thought—like many, I had been silent about my struggles with infertility. It’s a less-than-tasty casserole of heartache, injections and surgeries, failed adoption placements and financial devastation.

So I learned how to be my own advocate. I’ve spoken out, written articles and—most recently—lent my voice to the video above to raise awareness about the barriers to building a family. I want to better educate people on how to support their friends and family who are struggling with infertility.

As today marks the start of National Infertility Awareness Week, I—along with the other brave women in this video—am dedicated to sparking a bigger conversation, and overcoming the stigmas and barriers that surround infertility. I'm excited Google is using its platform to help put this message out into the world ahead of Mother's Day. I hope that this year, even one more person out there will realize they’re not alone.

Go green with your Google Assistant

It can be hard to know how to chip in and make a difference to protect the environment. You can recycle, take shorter showers, or carpool to work—and now you can lower your carbon footprint just by asking your Google Assistant.

With new advancements in smart home technology, it’s actually pretty easy to incorporate energy- and water-saving actions into your daily routine (and save some money while you’re at it). This Earth Day, we’re sharing a few ways the Assistant can help make your home more environmentally friendly.

Simple ways to save energy and automate

  1. Switch to LEDs. Swapping out just five incandescent bulbs with LED lights can save you up to $75 per year—plus, LEDs also last up to 50 times longer than incandescents, with a total life of at least 35,000 hours. Even better, pairing ENERGY STAR-certified smart bulbs like Philips Hue with the Assistant can help you control the lights with just your voice, or set lighting schedules to use electricity only when you need it.

  2. Choose ENERGY STAR certified appliances. Did you know that appliances contribute to a quarter of your home’s energy use? To optimize how that energy is used, choose an ENERGY STAR-certified brand like LG, GE Appliances, Samsung or Whirlpool, and connect it with Google Assistant to easily control appliances like refrigerators, dishwashers, ovens and air purifiers. Certain window air conditioning units and ceiling fans also work with the Google Assistant: Just say, “Hey Google, turn off the fan” to a Haiku fan as you leave a room or schedule your LG, Midea or Toshiba AC to turn off at the same time each day.

  3. Upgrade your thermostat. Many utilities offer rebates on smart thermostats because they make saving energy easy. Smart thermostats like the Nest Learning Thermostat can save an average of $131 to $145 a year (of course, individual savings are not guaranteed). That’s because Nest thermostats make smart, automatic temperature adjustments to save energy based on your habits. And you can even say “Hey Google, set the thermostat to eco mode” to make your home even more efficient.

  4. Monitor and protect from leaks. According to the EPA, the average family can lose 9,400 gallons of water annually from household leaks alone. To curb this waste, you can use leak detectors like LeakSmart, or install Flo by Moen to immediately get notifications if pipes leak, and use the Assistant to shut off the water.

  5. Curb your outdoor water use. You can still keep your lush lawn looking beautiful while using less water. Smart sprinkler systems like the Rachio 3 Smart Sprinkler Controller reduce water usage and now work with the Assistant, so you can easily control and monitor these systems with simple voice commands. As part of your Routine, you can also set the sprinklers for early morning or at night to prevent evaporation. We’re also adding support for Rain Bird’s family of Irrigation Controllers in the coming weeks.

How to set up everything with the Assistant

Download the Google Assistant or Google Home app and then click “Add device.” You can get started right away with commands like “Hey Google, turn down the temperature” with your Nest Thermostat. Or set up quick Routines that can help you automate energy savings by controlling multiple devices with a single command.

Our commitment to supporting families and the environment  

There are lots of changes we can make as individuals to combat climate change, but we're taking steps as a company to reduce energy use in U.S. households, too. The Power Project is our pledge to bring one million Nest thermostats to low-income families by 2023. Along with a coalition of partners—nonprofits like Habitat for Humanity and the National Housing Trust and energy companies like Georgia Power—we’ve installed Nest thermostats in homes over the last year to help families reduce their energy costs. This year, the Power Project is expanding to include partners Philips Hue and Whirlpool. Along with Nest, they’ll donate thousands of energy-saving technology products to Habitat for Humanity in the coming year. You can join us in providing energy-saving technology to those who need it most in your community by donating to nonprofits at nest.com/powerproject.

Making consistent changes to reduce energy consumption in our day-to-day lives is the key to long-term conservation; even the smallest changes add up to measurable impact. With Google Assistant and the right energy-saving technology, these changes are easier to make than ever.