
Google AMP Cache, AMP Lite, and the need for speed

Posted by Huibao Lin and Eyal Peled, Software Engineers, Google


At Google we believe in designing products with speed as a core principle. The Accelerated Mobile Pages (AMP) format helps ensure that content reliably loads fast, but we can do even better.

Smart caching is one of the key ingredients of the near-instant AMP experiences users get in products like Google Search and Google News & Weather. With caching, we can generally bring content physically closer to the users requesting it, so bytes take a shorter trip over the wire. In addition, using a single common infrastructure like a cache provides greater consistency in page serving times, even though the content originates from many hosts, which might have very different—and much larger—latencies in serving content than the cache.

Faster and more consistent delivery are the major reasons why pages served in Google Search's AMP experience come from the Google AMP Cache. The Cache's unified content serving infrastructure opens up the exciting possibility of building optimizations that scale to improve the experience across hundreds of millions of documents. Making it possible for any document to take advantage of these benefits is one of the main reasons the Google AMP Cache is available free for anyone to use.

In this post, we'll highlight two improvements we've recently introduced: (1) optimized image delivery and (2) enabling content to be served more successfully in bandwidth-constrained conditions through a project called "AMP Lite."

Image optimizations by the Google AMP Cache


On average across the web, images make up 64% of the bytes of a page. This makes images a very promising target for impactful optimizations.

Applying image optimizations is an effective way to cut bytes on the wire. The Google AMP Cache employs the image optimization stack used by the PageSpeed Modules and Chrome Data Compression. (Note that in order to make these transformations, the Google AMP Cache disregards the "Cache-Control: no-transform" header.) Sites can get the same image optimizations on their origin by installing PageSpeed on their server.

Here's a rundown of some of the optimizations we've made:

1) Removing data which is invisible or difficult to see
We remove image data that is invisible to users, such as thumbnail and geolocation metadata. For JPEG images, we also reduce quality and color samples if they are higher than necessary. To be exact, we reduce JPEG quality to 85 and color samples to 4:2:0 — i.e., one color sample per four pixels. Compressing a JPEG to quality higher than this or with more color samples takes more bytes, but the visual difference is difficult to notice.

The reduced image data is then exhaustively compressed. We've found that these optimizations reduce bytes by 40%+ while not being noticeable to the user's eye.
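
For a feel of what this transformation does, here is a minimal sketch using Pillow (the library choice is this example's assumption, not what the cache actually runs):

from PIL import Image

def recompress_jpeg(src_path, dst_path, quality=85):
    # Re-encoding without carrying over EXIF drops invisible metadata
    # such as embedded thumbnails and geolocation.
    img = Image.open(src_path)
    # quality=85 and subsampling=2 (4:2:0, one color sample per four
    # pixels) match the targets described above; per optimization 4
    # below, Data Saver users get a lower quality (around 50).
    img.save(dst_path, 'JPEG', quality=quality, subsampling=2, optimize=True)

recompress_jpeg('original.jpg', 'optimized.jpg')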

2) Converting images to WebP format
Some image formats are more mobile-friendly. We convert JPEG to WebP for supported browsers. This transformation leads to an additional 25%+ reduction in bytes with no loss in quality.
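
The equivalent sketch for the WebP conversion, again with Pillow assumed:

from PIL import Image

def to_webp(src_path, dst_path, quality=85):
    # Requires a Pillow build with libwebp; serve the .webp only to
    # browsers whose Accept header advertises image/webp.
    Image.open(src_path).save(dst_path, 'WEBP', quality=quality)

to_webp('optimized.jpg', 'optimized.webp')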

3) Adding srcset
We add "srcset" if it has not been included. This applies to "amp-img" tags with "src" but no "srcset" attribute. The operation includes expanding "amp-img" tag as well as resizing the image to multiple dimensions. This reduces the byte count further on devices with small screens.

4) Using lower quality images under some circumstances
We decrease the quality of JPEG images when there is an indication that this is desired by the user or for very slow network conditions (as part of AMP Lite discussed below). For example, we reduce JPEG image quality to 50 for Chrome users who have turned on Data Saver. This transformation leads to another 40%+ byte reduction to JPEG images.

The following example shows the image before (left) and after (right) optimizations. Originally the image was 241,260 bytes; after applying optimizations 1, 2, and 4 it is 25,760 bytes. The optimized image looks essentially the same, but 89% of the bytes have been saved.



AMP Lite for Slow Network Conditions


Many people around the world access the internet over slow connections or on devices with low RAM, and we've found that some AMP pages are not optimized for these severely bandwidth-constrained users. For this reason, Google has also launched AMP Lite to remove even more bytes from AMP pages for these users.

With AMP Lite, we apply all of the above optimizations to images. In particular, we always use the lower quality levels (see optimization 4 above).

In addition, we optimize external fonts by using the amp-font tag, setting the font loading timeout to 0 seconds so pages can be displayed immediately, regardless of whether the external font was previously cached or not.
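
For reference, a zero-timeout setup with the amp-font tag looks roughly like this (the font and class names are hypothetical):

<amp-font layout="nodisplay"
    font-family="My Custom Font"
    timeout="0"
    on-error-remove-class="custom-font-loading"
    on-load-add-class="custom-font-loaded"></amp-font>

With timeout="0", the page renders immediately with a fallback font unless the custom font is already available.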

AMP Lite is rolling out for bandwidth-constrained users in several countries, such as Vietnam and Malaysia, and for holders of low-RAM devices globally. Note that these optimizations may modify the fine details of some images, but they do not affect other parts of the page, including ads.

* * *

All told, we see a combined 45% reduction in bytes across all optimizations listed above.
We hope to go even further in making more efficient use of users' data to provide even faster AMP experiences.

Some Tips for Boosting your App’s Quality in 2017

Originally posted on the Firebase blog by Doug Stevenson, Developer Advocate


I've got to come clean with everyone: I'm making no new year's resolutions for 2017. Nor did I make any in 2016. In fact, I don't think I've ever made one! It's not so much that I take a dim view of new year's resolutions. I simply suspect that I would likely break them by the end of the week, and feel bad about it!

One thing I've found helpful in the past is to see the new year as a pivoting point to try new things, and also improve the work I'm already doing. For me (and I hope for you too!), 2017 will be a fresh year for boosting app quality.

The phrase "app quality" can take a bunch of different meanings, based on what you value in the software you create and use. As a developer, traditionally, this means fixing bugs that cause problems for your users. It could also be a reflection of the amount of delight your users take in your app. All of this gets wrapped up into the one primary metric that we have to gauge the quality of a mobile app, which is your app's rating on the store where it's published. I'm sure every one of you who has an app on a storefront has paid much attention to the app's rating at some point!

Firebase provides some tools you can use to boost your app's quality, and if you're not already using them, maybe a fresh look at those tools would be helpful this year?

Firebase Crash Reporting


The easiest tool to get started with is Firebase Crash Reporting. It takes little to no code to integrate into your iOS or Android app, and once you do, the Firebase console will start showing crashes that are happening to your users. This gives you a "hit list" of problems to fix.
One thing I find ironic about being involved with the Crash Reporting team is how we view the influx of total crashes received as we monitor our system. Like any good developer product, we strive to grow adoption, which means we celebrate graphs that go "up and to the right". So, in a strange sense, we like to see more crashes, because that means more developers are using our stuff! But for all of you developers out there, more crashes is obviously a *bad* thing, and you want to make those numbers go down! So, please, don't be like us - make your crash report graphs go down and to the right in 2017!

Firebase Test Lab for Android


Even better than fixing problems for your users is fixing those problems before they even reach your users. For your Android apps, you can use Firebase Test Lab to help ensure that your apps work great for your users across a growing variety of actual devices that we manage. Traditionally, it's been kind of a pain to acquire and manage a good selection of devices for testing. However, with Test Lab, you simply upload your APK and tests, and it will install and run them on our devices. After the tests complete, we'll provide all the screenshots, videos, and logs of everything that happened for you to examine in the Firebase console.

With Firebase Test Lab for Android now available with generous daily quotas at no charge for projects on the free Spark tier, 2017 is a great time to get started with that. And, if you haven't set up your Android app builds in a continuous integration environment, you could set that up, then configure it to run your tests automatically on Test Lab.

If you're the kind of person who likes writing tests for your code (which is, admittedly, not very many of us!), it's natural to get those tests running on Test Lab. But, for those of us who aren't maintaining a test suite with our codebase, we can still use Test Lab's automated Robo test to get automated test coverage right away, with no additional lines of code required. That's not quite the same as having a comprehensive suite of tests, so maybe 2017 would be a good time to learn more about architecting "testable" apps, and how those practices can raise the bar of quality for your app. I'm planning on writing more about this later this year, so stay tuned to the Firebase Blog for more!

Firebase Remote Config


At its core, Firebase Remote Config is a tool that lets you configure your app using parameters that you set up in the Firebase console. It can be used to help manage the quality of your app, and there are a couple of neat tricks you can do with it. Maybe this new year brings new opportunities to give them a try!

First of all, you can use Remote Config to carefully roll out a new feature to your users. It works like this (a sketch of the percentage-bucketing idea follows the list):
  1. Code your new feature and gate access to it behind a Remote Config boolean parameter. If the value is 'false', your users don't see the feature. Make 'false' the default value in the app.
  2. Configure that parameter in the Firebase console to also be initially 'false' for everyone.
  3. Publish your app to the store.
  4. When it's time to start rolling out the new feature to a small segment of users, configure the parameter to be 'true' for, say, five percent of your user base.
  5. Stay alert for new crashes in Firebase Crash Reporting, as well as feedback from your users.
  6. If there is a problem with the new feature, immediately roll back the new feature by setting the parameter to 'false' in the console for everyone.
  7. Or, if things are looking good, increase the percentage over time until you reach 100% of your users.
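
Remote Config handles the percentage targeting for you, but as a rough mental model (a sketch, not Firebase's actual implementation), a deterministic percentage rollout behaves like this:

import hashlib

def in_rollout(user_id, percent):
    # Hash the user id into a stable bucket from 0 to 99. A user keeps
    # the same bucket as the rollout grows, so ramping from 5% to 100%
    # only ever adds users; nobody flips back and forth.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

print(in_rollout('user-42', 5))    # stable answer on every call
print(in_rollout('user-42', 100))  # True for everyone at full rollout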


This is much safer than publishing your new feature to everyone with a single app update, because you now have the option to immediately disable a problematic feature, without having to build and publish a whole new version of your app. And, if you can act quickly, most of your users will never encounter the problem to begin with. This works well with the email alerts you get from Firebase Crash Reporting when a new crash is observed.

Another feature of Remote Config is the ability to experiment with some aspect of your app in order to find out what works better for the users of your app, then measure the results in Firebase Analytics. I don't know about you, but I'm typically pretty bad at guessing what people actually prefer, and sometimes I'm surprised at how people might actually *use* an app! Don't guess - instead, do an experiment and know for certain what delights your users more! It stands to reason that apps finely tuned like this can get better ratings and make more money.

Firebase Realtime Database


It makes sense that if you make it easier for your users to perform tasks in your app, they will enjoy using it more, and they will come back more frequently. One thing I have always disliked is having to check for new information by refreshing, or navigating back and forward again. Apps that are always fresh and up to date, without requiring me to take action, are more pleasant to use.

You can achieve this for your app by making effective use of Firebase Realtime Database to deliver relevant data directly to your users the moment it changes in the database. Realtime Database is reactive by nature: the client API is designed for you to set up listeners at data locations that are triggered whenever the data changes. This is far more convenient than polling an API endpoint repeatedly to check for changes, and it is also much more respectful of the user's mobile data and battery life. Users associate this feeling of delight with apps of high quality.
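
The same reactive pattern is available server-side. Here is a minimal sketch using the Firebase Admin SDK for Python (the SDK choice, the database URL, and the /scores path are all this example's assumptions):

import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate('service-account.json')
firebase_admin.initialize_app(cred, {
    'databaseURL': 'https://your-project.firebaseio.com',  # hypothetical
})

def on_change(event):
    # Called once with the current value, then again on every change --
    # no polling loop, no refresh button.
    print(event.path, event.data)

listener = db.reference('/scores').listen(on_change)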

What does 2017 have in store for your app?


I hope you'll join me this year in putting more effort into making our users even more delighted. If you're with me, feel free to tweet me at @CodingDoug and tell me what you're up to in 2017!

Start with a line, let the planet complete the picture

Jeff Nusz, Data Arts Team

Take a break this holiday season and paint with satellite images of the Earth through a new experiment called Land Lines. The project lets you explore Google Earth images in unexpected ways through gesture. Earth provides the palette; your fingers, the paintbrush.
There are two ways to explore: drag or draw. "Draw" to find satellite images that match your every line. "Drag" to create an infinite line of connected rivers, highways and coastlines. Here's a quick demo:


Everything runs in real time in your phone's web browser without any servers. The responsiveness of the project is a result of using machine learning, data optimization, and vantage-point trees to analyze the images and store that data.

We preprocessed the images using a combination of OpenCV's Structured Forests machine-learning-based edge detection and ImageJ's Ridge Detection library. This culled the initial dataset of over fifty thousand high-res images down to just a few thousand, selected for the presence of lines, as shown in the example below. What would ordinarily take days was completed in just a few hours.


Example output from the line detection processing. The dominant line is highlighted in red while secondary lines are highlighted in green.
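
A rough sketch of that preprocessing pass described above (assuming the opencv-contrib build, a pretrained structured-forests model file, and an invented keep/discard threshold):

import cv2
import numpy as np

# Pretrained structured-forests edge model from OpenCV's extra modules
# (the file path is hypothetical).
detector = cv2.ximgproc.createStructuredEdgeDetection('model.yml')

img = cv2.imread('satellite_tile.jpg')
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
edges = detector.detectEdges(rgb)  # per-pixel edge strength in [0, 1]

# Keep only tiles with a strong dominant line; discard the rest.
if edges.max() > 0.5:
    print('tile kept for the dataset')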



In the drawing exploration, we stored the resulting data in a vantage-point tree. This enabled us to efficiently run gesture matching against all the images and have results appear in milliseconds.


An early example of gesture matching using vantage point trees, where the drawn input is on the right and the closest results on the left.




Another example of user gesture analysis, where the drawn input is on the right and the closest results on the left.
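
Here, for illustration, is a minimal vantage-point tree in Python (a sketch of the data structure described above, not the project's actual code). It picks a vantage point, splits the remaining points at the median distance, and answers nearest-neighbor queries while pruning whole branches via the triangle inequality:

import random

class VPTree:
    # Minimal vantage-point tree: `points` are items of any type,
    # `dist` is a metric over them.
    def __init__(self, points, dist):
        self.dist = dist
        self.root = self._build(list(points))

    def _build(self, pts):
        if not pts:
            return None
        vantage = pts.pop(random.randrange(len(pts)))
        if not pts:
            return (vantage, 0.0, None, None)
        ds = [self.dist(vantage, p) for p in pts]
        mu = sorted(ds)[len(ds) // 2]  # median distance splits the set
        inside = [p for p, d in zip(pts, ds) if d < mu]
        outside = [p for p, d in zip(pts, ds) if d >= mu]
        return (vantage, mu, self._build(inside), self._build(outside))

    def nearest(self, query):
        best = [None, float('inf')]  # closest point and distance so far

        def search(node):
            if node is None:
                return
            vantage, mu, inside, outside = node
            d = self.dist(query, vantage)
            if d < best[1]:
                best[:] = [vantage, d]
            # Only descend into a half if a closer point could be there.
            if d < mu:
                search(inside)
                if d + best[1] >= mu:
                    search(outside)
            else:
                search(outside)
                if d - best[1] <= mu:
                    search(inside)

        search(self.root)
        return tuple(best)

euclid = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
tree = VPTree([(0, 0), (3, 4), (10, 10)], euclid)
print(tree.nearest((2, 3)))  # ((3, 4), 1.4142...)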



Built in collaboration with Zach Lieberman, Land Lines is an experiment in big visual data that explores themes of connection. We tried several machine learning libraries in our development process. The learnings from that experience can be found in the case study, while the project code is available open source on GitHub. Start with a line at g.co/landlines.

Formatting text with the Google Slides API

Originally posted on G Suite Developers blog

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

It's common knowledge that presentations utilize a set of images to impart ideas to the audience. As a result, one of the best practices for creating great slide decks is to minimize the overall amount of text. This means that if you do have text in a presentation, the (few) words you use must have higher impact and be visually appealing. This is even more true when the slides are generated by a software application, say using the Google Slides API, rather than being crafted by hand.

The G Suite team recently launched the first Slides API, opening up a whole new category of applications. Since then, we've published several videos to help you realize some of those possibilities, showing you how to replace text and images in slides as well as how to generate slides from spreadsheet data. To round out this trifecta of key API use cases, we're adding text formatting to the conversation.

Developers manipulate text in Google Slides by sending API requests. Similar to the Google Sheets API, these requests come in the form of JSON payloads sent to the API's batchUpdate() method. Here's the JavaScript for inserting text in some shape (shapeID) on a slide:

{
    "insertText": {
        "objectId": shapeID,
        "text": "Hello World!\n"
    }
}
In the video, developers learn that writing text, as in the request above, is less complex than reading or formatting it, because both of the latter require developers to know how text on a slide is structured. Notice that for writing, just the text and, optionally, an index are all that's required. (The index defaults to zero if not provided.)

Assuming "Hello World!" has been successfully inserted in a shape on a slide, a request to bold just the "Hello" looks like this:

{
    "updateTextStyle": {
        "objectId": shapeID,
        "style": {
            "bold": true
        },
        "textRange": {
            "type": "FIXED_RANGE",
            "startIndex": 0,
            "endIndex": 5
        },
        "fields": "bold"
    }
}

If you've got at least one request, like the ones above, in an array named requests, you'd ask the API to execute them all with a single call, which in Python looks like this (assuming SLIDES is your service endpoint and the slide deck ID is deckID):

SLIDES.presentations().batchUpdate(presentationId=deckID,
        body={'requests': requests}).execute()

To better understand text structure and styling in Google Slides, check out the text concepts guide in the documentation. For a detailed look at the complete code sample featured in the DevByte, check out the deep dive post. To see more samples for common API operations, take a look at this page. We hope the videos and all these developer resources help you create that next great app that automates producing highly impactful presentations for your users!

Google Developers YouTube Channel Reaches 1 Million Subscribers!

Posted by David Unger, Program Manager

Earlier this week, the Google Developers YouTube channel crossed the 1 million subscribers threshold. This is a monumental achievement for us, and we are extremely honored that so many of you have found our content valuable enough to click that red ‘Subscribe’ button.
The Google Developers YouTube channel has been bringing you content for just over 8 years, covering major developer events, like Google I/O and Playtime, as well as providing best practices on the latest tools to help you build high quality experiences! In that time, we’ve shared over 2,000 videos that have been viewed over 100 million times. Here is a look back at how we got to this milestone:

We are gearing up for another year of videos to help developers all over the world. To avoid missing any of it, you can subscribe to each of our YouTube channels using the following links:
Google Developers | Android Developers | Chrome Developers | Firebase

Scratch Blocks update: Making it easier to develop coding apps for kids

Posted by Champika Fernando, Product Manager, Google, and Kasia Chmielinski, Product Lead, MIT Scratch Team

We want to empower developers to build great creative learning apps for kids. That's why, earlier this year, we announced Scratch Blocks, a free, open-source project created by the MIT Scratch and Google Kids Coding teams. Together, we are building this highly tinkerable and playful block-based programming grammar based on MIT's popular Scratch language and Blockly's architecture. With Scratch Blocks, developers can integrate Scratch-style coding into apps for kids.

Today, we're excited to share our progress in a number of areas:

1. Designing for tinkerability

Research from the MIT Media Lab has highlighted the importance of providing children with opportunities for quick experimentation and rapid cycles of iteration. For example, the Scratch programming environment makes it easy for kids to adjust the code while it's running, as well as try coding blocks by just tapping on them. Since our initial announcement in May, we've focused on supporting this type of tinkerability in the Scratch Blocks project by making it very easy for developers to connect Scratch Blocks directly to the Scratch VM (a related open-source project being developed by MIT). In this model, instead of blocks being converted to a text-based language like JavaScript which is then interpreted, the blocks themselves are the code. The result is a more tinkerable experience for the end-user.

2. Designing for all levels

Computational thinking[1] is a valuable skill for everyone. In order to support developers building a wide diversity of coding experiences with Scratch Blocks, we've designed two related block grammars that can be used in a variety of contexts. One grammar uses icon-based blocks that connect horizontally, while the other uses text-based blocks that connect vertically.

We started by developing the horizontal grammar, which is well-suited for beginners of all ages due to its simplicity and limited number of blocks; additionally, this grammar is easier to manipulate on small screens. You can see an example of the horizontal icon-based grammar in Code a Snowflake (an activity in this year's Google Santa Tracker) built by the Google Kids Coding Team. More recently, we've added the vertical grammar, which supports a wider range of complex concepts. The horizontal grammar can also be translated into vertical blocks, making it possible to transition between the grammars. We imagine this will be useful in a number of situations, including designing for multiple screen sizes, or as an element of the app's learning experience.

3. Designing for all devices

We're building Scratch Blocks for a world that is increasingly mobile, where people of all ages will tinker with code in a variety of environments. We've improved the mobile experience in many key areas, both in Scratch Blocks as well as the underlying Blockly project:

  • Redesigned blocks for improved touchability
  • Fast loading of large projects on low-powered devices
  • Optimization of block manipulation and code editing on touch screens
  • Redesigned multi-touch support for a better experience on touch devices

How to get involved

These first six months of Scratch Blocks have been a lot of work - and a ton of fun. To stay up to date on the project, check out our GitHub project, and learn more on our Developer Page.

[1] Learn more about Computational Thinking

Announcing updates to Google’s Internet of Things platform: Android Things and Weave

Originally posted on Android Developer Blog

Posted by Wayne Piekarski, Developer Advocate for IoT

The Internet of Things (IoT) will bring computing to a whole new range of devices. Today we're announcing two important updates to our IoT developer platform to make it faster and easier for you to create these smart, connected products.

We're releasing a Developer Preview of Android Things, a comprehensive way to build IoT products with the power of Android, one of the world's most supported operating systems. Now any Android developer can quickly build a smart device using Android APIs and Google services, while staying highly secure with updates direct from Google. We incorporated the feedback from Project Brillo to include familiar tools such as Android Studio, the Android Software Development Kit (SDK), Google Play Services, and Google Cloud Platform. And in the coming months, we will provide Developer Preview updates to bring you the infrastructure for securely pushing regular OS patches, security fixes, and your own updates, as well as built-in Weave connectivity and more.

There are several turnkey hardware solutions available for you to get started building real products with Android Things today, including Intel Edison, NXP Pico, and Raspberry Pi 3. You can easily scale to large production runs with custom designs of these solutions, while continuing to use the same Board Support Package (BSP) from Google.

We are also updating the Weave platform to make it easier for all types of devices to connect to the cloud and interact with services like the Google Assistant. Device makers like Philips Hue and Samsung SmartThings already use Weave, and several others like Belkin WeMo, LiFX, Honeywell, Wink, TP-Link, and First Alert are implementing it. Weave provides all the cloud infrastructure, so that developers can focus on building their products without investing in cloud services. Weave also includes a Device SDK for supported microcontrollers and a management console. The Weave Device SDK currently supports schemas for light bulbs, smart plugs and switches, and thermostats. In the coming months we will be adding support for additional device types, custom schemas/traits, and a mobile application API for Android and iOS. Finally, we're also working towards merging Weave and Nest Weave to enable all classes of devices to connect with each other in a secure and reliable way. So whether you started with Google Weave or Nest Weave, there is a path forward in the ecosystem.

This is just the beginning of the IoT ecosystem we want to build with you. To get started, check out Google's IoT developer site, or go directly to the Android Things, Weave, and Google Cloud Platform sites for documentation and code samples. You can also join Google's IoT Developers Community on Google+ to get the latest updates and share and discuss ideas with other developers.

Start building Actions on Google

Posted by Jason Douglas, PM Director for Actions on Google

The Google Assistant brings together all of the technology and smarts we've been building for years, from the Knowledge Graph to Natural Language Processing. To be a truly successful Assistant, it should be able to connect users across the apps and services in their lives. This makes enabling an ecosystem where developers can bring diverse and unique services to users through the Google Assistant really important.

In October, we previewed Actions on Google, the developer platform for the Google Assistant. Actions on Google further enhances the Assistant user experience by enabling you to bring your services to the Assistant. Starting today, you can build Conversation Actions for Google Home and request to become an early access partner for upcoming platform features.

Conversation Actions for Google Home

Conversation Actions let you engage your users to deliver information, services, and assistance. And the best part? It really is a conversation -- users won't need to enable a skill or install an app, they can just ask to talk to your action. For now, we've provided two developer samples of what's possible. Just say "Ok Google, talk to Number Genie" or try "Ok Google, talk to Eliza" for the classic 1960s AI exercise.

You can get started today by visiting the Actions on Google website for developers. To help create a smooth, straightforward development experience, we worked with a number of development partners, including conversational interaction development tools API.AI and Gupshup, analytics tools DashBot and VoiceLabs, and consulting companies such as Assist, Notify.IO, Witlingo and Spoken Layer. We also created a collection of samples and voice user interface (VUI) resources, or you can check out the integrations from our early access partners as they roll out over the coming weeks.

Introduction to Conversation Actions by Wayne Piekarski

Coming soon: Actions for Pixel and Allo + Support for Purchases and Bookings

Today is just the start, and we're excited to see what you build for the Google Assistant. We'll continue to add more platform capabilities over time, including the ability to make your integrations available across the various Assistant surfaces like Pixel phones and Google Allo. We'll also enable support for purchases and bookings as well as deeper Assistant integrations across verticals. Developers who are interested in creating actions using these upcoming features should register for our early access partner program and help shape the future of the platform.

Build, explore and let us know what you think about Actions on Google! And to stay in the loop, be sure to sign up for our newsletter, join our Google+ community, and use the "actions-on-google" tag on StackOverflow.

Modifying email signatures with the Gmail API

Originally posted on G Suite Developer blog

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

The Gmail API team introduced a new settings feature earlier this year, and today, we're going to explore some of that goodness, showing developers how to update Gmail user settings with the API.

Email continues to be a dominant form of communication, personally and professionally, and our email signature serves as both a lightweight introduction and a business card. It's also a way to slip in a sprinkling of your personality. Wouldn't it be interesting if you could automatically change your signature whenever you wanted without using the Gmail settings interface every time? That is exactly what our latest video is all about.

If your app has already created a Gmail API service endpoint, say in a variable named GMAIL, and you have the email address (YOUR_EMAIL) whose signature should be changed, as well as the text of the new signature, updating it via the API is pretty straightforward, as illustrated by this Python call to the GMAIL.users().settings().sendAs().patch() method:

signature = {'signature': '"I heart cats."  ~anonymous'}
GMAIL.users().settings().sendAs().patch(userId='me',
sendAsEmail=YOUR_EMAIL, body=signature).execute()

For more details about the code sample used in the requests above as well as in the video, check out the deep dive post. In addition to email signatures, other settings the API can modify include: filters, forwarding (addresses and auto-forwarding), IMAP and POP settings to control external email access, and the vacation responder. Be aware that while API access to most settings is available for any G Suite Gmail account, a few sensitive operations, such as modifying send-as aliases or forwarding, are restricted to users with domain-wide authority.
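
For instance, enabling the vacation responder is an equally small call (the wording and timing below are invented for illustration):

import time

vacation = {
    'enableAutoReply': True,
    'responseSubject': 'Out of office',
    'responseBodyPlainText': 'Back next week; replying then.',
    'restrictToContacts': False,
    'startTime': int(time.time() * 1000),  # epoch milliseconds
}
GMAIL.users().settings().updateVacation(
    userId='me', body=vacation).execute()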

Developers interested in using the Gmail API to access email threads and messages instead of settings can check out this other video where we show developers how to search for threads with a minimum number of messages, say to look for the most discussed topics from a mailing list. Regardless of your use-case, you can find out more about the Gmail API in the developer documentation. If you're new to the API, we suggest you start with the overview page which can point you in the right direction!

Be sure to subscribe to the Google Developers channel and check out other episodes in the G Suite Dev Show video series.

AMP Cache Updates

Posted by John Coiner, Software Engineer

Today we are announcing a change to the domain scheme of the Google AMP Cache. Beginning soon, the Google AMP Cache will serve each site from its own subdomain of https://cdn.ampproject.org. This change will allow content served from the Google AMP Cache to be protected by the fundamental security model of the web: the HTML5 origin.

No immediate changes are required for most publishers of AMP documents. However, to benefit from the additional security, it is recommended that all AMP publishers update their CORS implementation in preparation for the new Google AMP Cache URL scheme. The Google AMP Cache will continue to support existing URLs, but those URLs will eventually redirect to the new URL scheme.

How subdomain names will be created on the Google AMP Cache

The subdomains created by the Google AMP Cache will be human-readable when character limits and technical specs allow, and will closely resemble the publisher's own domain.

When possible, the Google AMP Cache will create each subdomain by first converting the AMP document domain from IDN (punycode) to UTF-8. Every "-" (dash) is then replaced with "--" (two dashes) and every "." (dot) with a "-" (dash). For example, pub.com maps to pub-com.cdn.ampproject.org. Where technical limitations prevent a human-readable subdomain, a one-way hash is used instead.
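
The dash-and-dot rewriting fits in a few lines (a sketch of just that step; the IDN-to-UTF-8 conversion and the hash fallback are omitted):

def amp_cache_subdomain(domain):
    # '-' becomes '--' first so the later '.' -> '-' step stays
    # unambiguous, then every '.' becomes '-'.
    return domain.replace('-', '--').replace('.', '-')

print(amp_cache_subdomain('pub.com'))     # pub-com
print(amp_cache_subdomain('my-pub.com'))  # my--pub-com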

Updates needed for hosts and service providers with remote endpoints

Due to the changes described above, CORS endpoints will begin seeing requests with new origins. The following updates will be required:

  • Expand request acceptance to the new subdomains: Sites that currently only accept CORS requests from https://cdn.ampproject.org and the publisher's own origins must update their systems to accept requests from https://[pub-com].cdn.ampproject.org, https://cdn.ampproject.org, and the AMP publisher's own origins (see the sketch after this list).
  • Tighten request acceptance for security: Sites that currently accept CORS requests from https://*.ampproject.org, as described in the AMP spec, can improve security by restricting acceptance to requests from https://[pub-com].cdn.ampproject.org, https://cdn.ampproject.org, and the AMP publisher's own origins. Support for https://*.ampproject.org is no longer necessary.
  • Support for the new subdomain pattern by ads, analytics, and other technology providers: Service providers such as analytics and ads vendors that have a CORS endpoint will also need to ensure that their systems accept requests from the Google AMP Cache's subdomains (e.g., https://ampbyexample-com.cdn.ampproject.org), in addition to their own hosts.
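
In practice these updates amount to maintaining an explicit origin allowlist; here is a minimal sketch (pub-com and pub.com stand in for your own domains):

ALLOWED_ORIGINS = {
    'https://pub-com.cdn.ampproject.org',  # new per-site cache subdomain
    'https://cdn.ampproject.org',          # existing cache origin
    'https://pub.com',                     # the publisher's own origin
}

def cors_response_headers(request_origin):
    # Echo the origin back only when it is explicitly allowed; a
    # wildcard match on *.ampproject.org is no longer necessary.
    if request_origin in ALLOWED_ORIGINS:
        return {'Access-Control-Allow-Origin': request_origin}
    return {}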

Retrieving the Google AMP Cache URL

For platforms that display AMP documents and serve from the Google AMP Cache, the best way to retrieve Google AMP Cache URLs is to continue using the Google AMP Cache URL API. The Google AMP Cache URL API will be updated in Q1 2017 to return the new cache URL scheme that includes the subdomain.

You can use an interactive tool to find the Google AMP Cache subdomain generated for each site over at ampbyexample.com.


Timing

Google Search is planning to begin using the new URL scheme as soon as possible and is monitoring sites' compatibility. In addition, we will be reaching out to impacted parties, and we will make available a developer testing sandbox prior to launching to ensure a smooth transition.