Monthly Archives: January 2019

Dev Channel Update for Desktop

The dev channel has been updated to 73.0.3683.10 for Windows, Mac & Linux.


A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
Abdul Syed
Google Chrome

New Required Header for Google Ads API Requests

Starting February 11, 2019, if you are making Google Ads API requests using OAuth credentials from a manager account and are accessing a related customer account, you will need to send an additional HTTP request header (or gRPC metadata field). Set the login-customer-id header to the customer ID of the manager account, with any hyphens removed.

Setting this new header is essentially equivalent to choosing an account in the Google Ads UI after signing in, and it’s configurable in the client libraries. Each library’s README explains how to set a login-customer-id in your local configuration. If this header is not set, you may begin seeing the error USER_PERMISSION_DENIED.
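For illustration only, here is a minimal sketch of how the header might appear on a raw HTTP request to the API; the endpoint path, tokens, and request body below are placeholders (not a real call), and in practice you would set login-customer-id through your client library’s configuration as described in its README.

const https = require('https');

// Sketch only: the path, tokens, and body are placeholders, not a real Google Ads API call.
// The important part is the login-customer-id header, which carries the manager account's
// customer ID with the hyphens removed (e.g. 123-456-7890 becomes 1234567890).
const req = https.request({
  hostname: 'googleads.googleapis.com',
  path: '/vX/customers/CLIENT_CUSTOMER_ID/googleAds:search',   // placeholder path
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ACCESS_TOKEN_FROM_MANAGER_OAUTH', // placeholder token
    'developer-token': 'DEVELOPER_TOKEN',                      // placeholder token
    'login-customer-id': '1234567890',                         // manager account ID, no hyphens
    'Content-Type': 'application/json'
  }
}, res => res.pipe(process.stdout));

req.end(JSON.stringify({ query: 'SELECT campaign.id FROM campaign' })); // placeholder body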

For more details on request headers, including login-customer-id, see our documentation.

If you have questions, please reach out to us on the Google Ads API forum.

Dedicated Hangouts Meet IP addresses

What’s changing

We’re adding a range of official, fixed IP addresses to be used exclusively for classic Hangouts and Hangouts Meet in G Suite domains. This means you can identify G Suite video conference traffic and deprioritize Hangouts traffic from consumer accounts, which can help you better configure and optimize network and firewall access.

Who’s impacted

Admins and network operators only

Why you’d use it

Hangouts Meet IP addresses allow you to recognize G Suite video conference traffic. Use the IPs to:

  • Open Meet’s TCP and UDP ports for Meet IPs
  • Avoid tunneling or deep packet inspection (DPI) for Meet IPs
  • Reduce latency by providing the shortest path possible to the internet for Meet traffic

How to get started

  • Admins: Add the dedicated Meet IP address ranges to your firewall and network configuration; see the Help Center for the full list of addresses.
  • End users: No action is needed.
Additional information

Hangouts Meet and classic Hangouts will stop using the old IP addresses on February 14, 2019. As this change might interfere with any network optimizations you have previously set up, we recommend adopting these IP addresses as part of your firewall and network configuration.

Availability

Rollout details

  • Rapid Release domains: Full rollout (1–3 days for feature visibility) starting on February 14, 2019
  • Scheduled Release domains: Full rollout (1–3 days for feature visibility) starting on February 14, 2019

G Suite editions
Available to all G Suite editions

On/off by default?
This feature will be ON by default.


Reach customers via additional marketing channels with Analytics 360 + Salesforce Marketing Cloud

Last year, we shared our plan to make Google Analytics 360 audiences available for activation in Salesforce Marketing Cloud, so that marketers can deliver more timely and relevant messages via additional marketing channels like email and SMS. Today, we’re excited to share that this capability will become available to customers in the next two weeks.

Use the power of Analytics 360 audiences in Marketing Cloud campaigns

With over 250 site engagement dimensions and metrics available in Analytics 360, marketers can create precise audiences to deliver more relevant messages to their customers. By sharing Analytics 360 audiences within Google Marketing Platform, marketers can deliver personalized search and display ads and customized site experiences. Now, by sharing Analytics 360 audiences to Marketing Cloud, marketers can use the insights from Analytics 360 to customize their Marketing Cloud campaigns — extending the reach of Analytics 360 audiences to email, SMS, and push notifications.


Let’s say you want to re-engage valuable customers who have just visited your site for the first time in a while. Now you can reach them directly by email soon after they leave your site. Simply create an audience of these users in Analytics 360 and share it with Marketing Cloud. Then, you can reach them with a Marketing Cloud email campaign that includes a special promotion to bring them back to your site to make a purchase.


Create an Analytics 360 audience and share to Marketing Cloud.

Get complete campaign reporting across channels

In addition to sharing Analytics 360 audiences to Marketing Cloud, marketers already have access to another important capability through this integration: deeper campaign reporting.

With Analytics 360 and Marketing Cloud integrated, you can bring customer site engagement data from Analytics 360 into your Marketing Cloud reporting. Marketers can now see campaign metrics, like conversion rate, alongside site engagement metrics, like time spent on site, for their Marketing Cloud campaigns. No more switching back and forth between platforms: you can see end-to-end campaign performance right within Marketing Cloud.

To learn more about the integration between Analytics 360 and Marketing Cloud, you can check out our new feature brief here or reach out to your Analytics 360 sales team.

Advancing research on fake audio detection

When you listen to Google Maps driving directions in your car, get answers from your Google Home, or hear a spoken translation in Google Translate, you're using Google's speech synthesis, or text-to-speech (TTS) technology. Speech interfaces not only allow you to interact naturally and conveniently with digital devices, they're a crucial technology for making information universally accessible: TTS opens up the internet to millions of users all over the world who may not be able to read, or who have visual impairments.


Over the last few years, there’s been an explosion of new research using neural networks to simulate a human voice. These models, including many developed at Google, can generate increasingly realistic, human-like speech.


While the progress is exciting, we’re keenly aware of the risks this technology can pose if used with the intent to cause harm. Malicious actors may synthesize speech to try to fool voice authentication systems, or they may create forged audio recordings to defame public figures. Perhaps equally concerning, public awareness of "deep fakes" (audio or video clips generated by deep learning models) can be exploited to manipulate trust in media: as it becomes harder to distinguish real from tampered content, bad actors can more credibly claim that authentic data is fake.


We're taking action. When we launched the Google News Initiative last March, we committed to releasing datasets that would help advance state-of-the-art research on fake audio detection.  Today, we're delivering on that promise: Google AI and Google News Initiative have partnered to create a body of synthetic speech containing thousands of phrases spoken by our deep learning TTS models. These phrases are drawn from English newspaper articles, and are spoken by 68 synthetic "voices" covering a variety of regional accents.  


We're making this dataset available to all participants in the independent, externally-run 2019 ASVspoof challenge. This open challenge invites researchers all over the globe to submit countermeasures against fake (or "spoofed") speech, with the goal of making automatic speaker verification (ASV) systems more secure. By training models on both real and computer-generated speech, ASVspoof participants can develop systems that learn to distinguish between the two. The results will be announced in September at the 2019 Interspeech conference in Graz, Austria.


As we published in our AI Principles last year, we take seriously our responsibility both to engage with the external research community, and to apply strong safety practices to avoid unintended results that create risks of harm. We're also firmly committed to Google News Initiative's charter to help journalism thrive in the digital age, and our support for the ASVspoof challenge is an important step along the way.

Supporting the military community for whatever’s next

In August 2018, Google made a commitment to veterans, military spouses, and service members transitioning to civilian careers. At that time, we announced a job search experience that uses military occupational specialty codes to connect service members and veterans with open jobs that call for skills developed during their time in service.

In the months since, we’ve continued our work to make it even more useful for those who are searching for civilian jobs and the amazing people who support and guide them. People like Kristen Rheinlander, who works as the Transition Site Manager of the USO Pathfinder Program at Fort Hood, Texas. A self-described Army brat whose father served in the military for 25 years, Kristen came to the USO as a volunteer 4 years ago. Today, she heads up a team that works with service members and their families as they prepare for a new challenge: figuring out what comes next.


Every new challenge has a first step, and for Kristen, it starts with helping people see the connections between the skills they developed in the military and civilian jobs. By introducing her clients to the Google Search tool early in the process, she’s able to show them the types of occupations that align with their expertise, whether demand for a field is projected to grow, and active job listings in a given geographic area. It’s a confidence booster, she says—the search tool is a translator that “puts words to the unknown,” providing greater clarity for clients unsure of which roles, companies, and industries align with what they’re looking to do next. After finding a lead through the Google Search tool, Kristen works with her clients to begin crafting resumes that highlight their military experiences in language civilian employers use and understand.

Helping people find connections between skills developed in the military and civilian jobs is just one of the many ways we’re working to create useful tools and programs for transitioning service members, veterans, and military families—a community that’s sacrificed so much in service to our country. For the over 2.5 million veterans who’ve decided that their next step is owning their own business, we’ve created a “Veteran-Led” attribute for their Google My Business profiles. With this badge, veteran-led businesses stand out across Google Search and Maps. And for transitioning service members and military spouses who are interested in the growing field of IT support, we’ve made it easier for them to earn Google’s IT Support Professional Certificate through a $2.5 million grant to the USO.

Visit Grow with Google to learn more about job search and our other tools and programs for veterans.

Through these resources, we’re working to help service members, veterans, and their families prepare #ForWhateversNext.

Source: Search


Creating Smart Shopping campaigns with the local inventory ads setting enabled will be rejected

Starting February 15, 2019, in all AdWords API versions and in the Google Ads API, we will reject requests that attempt to create a Smart Shopping campaign with the local inventory ads setting enabled. The local inventory ads setting is equivalent to setting enableLocal to true in the AdWords API, and enable_local to true in the Google Ads API. Trying to set those fields to true when creating a Smart Shopping campaign will result in the OperationAccessDenied.OPERATION_NOT_PERMITTED_FOR_CAMPAIGN_TYPE error. Previously, those fields were ignored when passed to the API servers.

Why is this happening?
Throwing an error for the requests described above provides an alert to users that local inventory ads are not supported in Smart Shopping campaigns.

What should you do?
Make sure you do not have code that creates a Smart Shopping campaign with local inventory ads enabled. Specifically, when you create a ShoppingSetting object for a Smart Shopping campaign, either leave the local inventory ads field (enableLocal / enable_local) unset or set it explicitly to false, as in the sketch below. Follow the guides below for details on how to create a Smart Shopping campaign. As always, if you have any questions or concerns, please post them on our forum.
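As a purely illustrative sketch (not a call to any actual client library), this is the shape of a shopping setting for a Smart Shopping campaign that avoids the rejected configuration; the field names follow the Google Ads API style and the merchant ID is a placeholder.

// Sketch only: a plain object in Google Ads API field style, not a real client library call.
const shoppingSetting = {
  merchant_id: 1234567,   // placeholder Merchant Center ID
  sales_country: 'US',
  // For Smart Shopping campaigns, leave enable_local unset or explicitly false.
  // Setting it to true after February 15, 2019 triggers
  // OperationAccessDenied.OPERATION_NOT_PERMITTED_FOR_CAMPAIGN_TYPE.
  enable_local: false
};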

The movement to power Puerto Rico with the sun

On September 20, 2017, Hurricane Maria, the worst natural disaster on record to affect Puerto Rico, left people without homes or electricity. Eight months later, over 1,000 households were still without power. So communities across the island set out to find creative ways to generate electricity.



After the disaster, the government of Puerto Rico committed to ambitious plans to transform its hurricane-battered electric grid to rely entirely on renewable energy by 2050. Project Sunroof maps the solar potential for buildings, in an effort to support the world’s transition to a renewable energy future. After the hurricane, we worked quickly to integrate Project Sunroof data covering Puerto Rico with Sunrun, a residential solar, storage and energy services company. Sunrun streamlined designs and installations across local installers to offer solar-as-a-service and home battery solutions to households, local fire stations and small businesses in Puerto Rico. For example, Maximo Solar, one of the leading solar installers on the west side of the island, used Project Sunroof data to support over 100 installations.


Of the 44,000 Puerto Rico rooftops surveyed by Project Sunroof, 90% were viable for solar, showing the longer-term opportunity for island residents to harness renewable energy from the sun. By identifying the best locations to install solar panels, Project Sunroof data puts actionable insights in the hands of communities working toward energy independence, enables critical cost savings, and reduces some of the complexities in the installation process.


Responding to any crisis of the magnitude of Hurricane Maria is a complex endeavor, but Puerto Rico is a powerful example of how communities can respond rapidly to deploy solutions that improve and protect people’s livelihoods. When put in the hands of local installers, solar information for Puerto Rico helped meet the urgent short-term need for electricity and supported the movement toward a long-term renewable energy future. Our work on Project Sunroof is one of Google’s many ongoing efforts to invest in Puerto Rican residents and economic recovery efforts on the island.

Touchdown! Score with Search and Assistant for Sunday’s Big Game

As the Patriots and Rams head to Atlanta for Sunday’s big spectacle, you can turn to Search and the Google Assistant to stay in the know and get some help prepping to watch the game. We’re taking a look at real-time Google Trends data to see the top questions and topics people are searching for and scoring some pro tips from the Assistant before kickoff.


State-by-State Showdown

We can’t say whether it’s fandom driving the searches, but this year’s AFC champs are dominating search interest in most states in the U.S.


US: Search interest state-by-state, past week, as of January 30

Focus on the Field

If you’re gearing up for another G.O.A.T. debate, you can keep an eye on which players are capturing football fans’ attention. (Hint: it looks like Brady is collecting searches like he collects rings.)



US: Top searched Super Bowl players, past week, as of January 30

And the players won’t be the only things turning heads this Sunday. The much-anticipated halftime show also has people searching for this year’s performers Maroon 5, Travis Scott and Big Boi.


US: Top searched halftime performers, past week, as of January 30

Game Day Grub

Regardless of your football or musical preferences, snacks are something that everyone can get excited about. For inspiration on what to make, look no further than the most uniquely-searched Super Bowl recipes by state this year.

Map of Super Bowl Foods State-by-state

To stay up-to-date on other trends that are topping the charts this weekend, follow along on our Google Trends page.


Call an Audible

Everyone has their own game plan for Sunday, and the Assistant has simple ways to help you get in the game—even if football isn’t typically your thing.

  • Whether you’re a seasoned fan or new to the game, say “Hey Google, help me talk like a football fan” and the Assistant will help you speak like a pro. Insider tip: If the team you’re rooting for is on defense, just say “Put more pressure on the quarterback!”

  • This year, the Assistant will even share its prediction for how the game might play out. Simply ask, “Hey Google, who’s going to win the big game?” Hint: the Assistant can’t help but root for the underdog.

  • When it comes time to celebrate your team scoring, don’t forget to bring the Assistant along for the fun. Say it with me, now: “Hey Google, touchdown!”

No matter who you’re rooting for this year, Search and the Assistant are here to help you get game day ready.

Source: Search


Dynamic Rendering with Rendertron

Many frontend frameworks rely on JavaScript to show content. This can mean Google might take some time to index your content or update the indexed content. 
A workaround we discussed at Google I/O 2018 is dynamic rendering. There are many ways to implement this. This blog post shows an example implementation of dynamic rendering using Rendertron, which is an open source solution based on headless Chromium.

Which sites should consider dynamic rendering?


  • Not all search engines or social media bots visiting your website can run JavaScript; Googlebot, for example, might take time to run your JavaScript and has some limitations.
  • Dynamic rendering is useful for content that changes often and needs JavaScript to display.
  • Your site's user experience (especially the time to first meaningful paint) may benefit from hybrid rendering (for example, Angular Universal) instead.

How does dynamic rendering work?


Dynamic rendering means switching between client-side rendered and pre-rendered content for specific user agents.

You will need a renderer to execute the JavaScript and produce static HTML. Rendertron is an open source project that uses headless Chromium to render. Single Page Apps often load data in the background or defer work to render their content. Rendertron has mechanisms to determine when a website has completed rendering. It waits until all network requests have finished and there is no outstanding work.

In this post, we will:
  1. Take a look at a sample web app
  2. Set up a small express.js server to serve the web app
  3. Install and configure Rendertron as a middleware for dynamic rendering

The sample web app

The “kitten corner” web app uses JavaScript to load a variety of cat images from an API and displays them in a grid.

Cute cat images in a grid and a button to show more - this web app truly has it all!
Here is the JavaScript:


  
// Fetch 50 cat images from the API and render them into the grid.
const apiUrl = 'https://api.thecatapi.com/v1/images/search?limit=50';
const tpl = document.querySelector('template').content;
const container = document.querySelector('ul');

function init() {
  fetch(apiUrl)
    .then(response => response.json())
    .then(cats => {
      // Clear the grid, then clone the template once per cat and set the image URL.
      container.innerHTML = '';
      cats
        .map(cat => {
          const li = document.importNode(tpl, true);
          li.querySelector('img').src = cat.url;
          return li;
        })
        .forEach(li => container.appendChild(li));
    });
}

init();

// The button re-runs the fetch to load a new set of images.
document.querySelector('button').addEventListener('click', init);

The web app uses modern JavaScript (ES6), which isn't supported in Googlebot yet. We can use the mobile-friendly test to check if Googlebot can see the content:
The mobile-friendly test shows that the page is mobile-friendly, but the screenshot is missing all the cats! The headline and button appear but none of the cat pictures are there.
While this problem is simple to fix, it's a good exercise to learn how to set up dynamic rendering. Dynamic rendering will allow Googlebot to see the cat pictures without changes to the web app code.

Set up the server

To serve the web application, let's use express.js, a node.js library for building web servers.
The server code looks like this (find the full project source code here):

const express = require('express');
const app = express();

const DIST_FOLDER = process.cwd() + '/docs';
const PORT = process.env.PORT || 8080;

// Serve static assets (images, css, etc.)
app.get('*.*', express.static(DIST_FOLDER));

// Point all other URLs to index.html for our single page app
app.get('*', (req, res) => {
 res.sendFile(DIST_FOLDER + '/index.html');
});

// Start Express Server
app.listen(PORT, () => {
 console.log(`Node Express server listening on http://localhost:${PORT} from ${DIST_FOLDER}`);
});

You can try the live example here; you should see a bunch of cat pictures if you are using a modern browser. To run the project from your own computer, you need node.js. Run the following commands:

npm install --save express rendertron-middleware
node server.js

Then point your browser to http://localhost:8080. Now it’s time to set up dynamic rendering.

Deploy a Rendertron instance

Rendertron runs a server that takes a URL and returns static HTML for the URL by using headless Chromium. We'll follow the recommendation from the Rendertron project and use Google Cloud Platform.
The form to create a new Google Cloud Platform project.
Please note that while you can get started with the free usage tier, using this setup in production may incur costs according to Google Cloud Platform pricing.

  1. Create a new project in the Google Cloud console. Take note of the “Project ID” below the input field.
  2. Clone the Rendertron repository from GitHub with:
    git clone https://github.com/GoogleChrome/rendertron.git 
    cd rendertron 
  3. Run the following commands to install dependencies and build Rendertron on your computer:
    npm install && npm run build
  4. Enable Rendertron’s cache by creating a new file called config.json in the rendertron directory with the following content:
    { "datastoreCache": true }
  5. Run the following command from the rendertron directory. Substitute YOUR_PROJECT_ID with your project ID from step 1.
    gcloud app deploy app.yaml --project YOUR_PROJECT_ID

  6. Select a region of your choice and confirm the deployment. Wait for it to finish.

  7. Enter the URL YOUR_PROJECT_ID.appspot.com (substitute YOUR_PROJECT_ID with your actual project ID from step 1) in your browser. You should see Rendertron’s interface with an input field and a few buttons.
Rendertron’s UI after deploying to Google Cloud Platform
When you see the Rendertron web interface, you have successfully deployed your own Rendertron instance. Take note of your project’s URL (YOUR_PROJECT_ID.appspot.com) as you will need it in the next part of the process.

Add Rendertron to the server

The web server is using express.js and Rendertron has an express.js middleware. Run the following command in the directory of the server.js file:

npm install --save rendertron-middleware

This command installs the rendertron-middleware from npm so we can add it to the server:

const express = require('express');
const app = express();
const rendertron = require('rendertron-middleware');

Configure the bot list

Rendertron uses the user-agent HTTP header to determine if a request comes from a bot or a user’s browser. It has a well-maintained list of bot user agents to compare with. By default this list does not include Googlebot, because Googlebot can execute JavaScript. To make Rendertron render Googlebot requests as well, add Googlebot to the list of user agents:

const BOTS = rendertron.botUserAgents.concat('googlebot');
const BOT_UA_PATTERN = new RegExp(BOTS.join('|'), 'i');

Rendertron compares the user-agent header against this regular expression later.

Add the middleware

To send bot requests to the Rendertron instance, we need to add the middleware to our express.js server. The middleware checks the requesting user agent and forwards requests from known bots to the Rendertron instance. Add the following code to server.js and don’t forget to substitute “YOUR_PROJECT_ID” with your Google Cloud Platform project ID:

app.use(rendertron.makeMiddleware({
 proxyUrl: 'https://YOUR_PROJECT_ID.appspot.com/render',
 userAgentPattern: BOT_UA_PATTERN
}));

Bots requesting the sample website receive the static HTML from Rendertron, so the bots don’t need to run JavaScript to display the content.

Testing our setup

To test if the Rendertron setup was successful, run the mobile-friendly test again.
Unlike in the first test, the cat pictures are now visible. In the HTML tab, we can see all the HTML that the JavaScript code generated, and that Rendertron has removed the need for JavaScript to display the content.
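You can also inspect the rendered output directly by requesting your page from the Rendertron instance’s /render endpoint. The snippet below is a small sketch that assumes the instance URL from the deployment step and a publicly reachable URL for the sample app (shown here as a placeholder):

const https = require('https');

// Placeholders: substitute your Rendertron project ID and the public URL of your app.
const renderedUrl =
  'https://YOUR_PROJECT_ID.appspot.com/render/https://your-app.example.com/';

https.get(renderedUrl, res => {
  let html = '';
  res.on('data', chunk => (html += chunk));
  res.on('end', () => {
    // If dynamic rendering works, the static HTML should already contain the <img>
    // elements that the client-side JavaScript would normally create.
    console.log(html.includes('<img') ? 'Cat images present in rendered HTML' : 'No images found');
  });
});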

Conclusion

You created a dynamic rendering setup without making any changes to the web app. With these changes, you can serve a static HTML version of the web app to crawlers.
