Consumers will determine the future of news

In the 19th century, newspapers were rocked by a disruptive new technology. The telegraph allowed information to travel faster than ever before, worrying editors and journalists. Why would anyone read a newspaper if news could travel instantly through cables?
The telegraph meant readers expected news to be conveyed more efficiently. But the anticipated newspaper apocalypse never arrived. Far from bringing an end to the industry, the telegraph was co-opted by the best newspapers. Although the telegraph is obsolete today, the rapid and reliable delivery of information it enabled remains a hallmark of the newspaper industry.
Technology has shaped the way people consume news for centuries. Today, anyone with a smartphone can access an unprecedented number of news sources, while sharing content with friends and followers. Consumers are also using the Internet and mobile apps to engage with new forms of advertising, putting pressure on traditional ad-supported industries, including news publishers.
These changes in consumer and marketing behaviour have profound implications for traditional news business models. But they do not mean the death of journalism. In fact, our appetite for quality journalism is on the rise. According to Enhanced Media Metrics Australia, 90 per cent of Australians read Australian news media, and readership has been increasing.
I have read the AFR every day since high school. The way I read it has changed from print to a computer and now to a smartphone. What has remained constant is my need to be kept informed, whether on breaking business developments or the latest cricket results. Technology is the news industry’s strongest tool in satisfying the basic human need for good reporting.
We at Google are not content makers, we do not employ people to work as journalists, and we have no intention of becoming a news publisher. But we share an important common vision with the Australian news industry, which is to ensure that people have access to quality news and information. This is at the heart of our partnerships with publishers such as Fairfax Media. It is also why we support the Australian Competition and Consumer Commission’s inquiry. We have submitted our response and are ready to engage transparently and constructively with the Commission.
We are committed to securing journalism’s bright future in Australia by doing our part to make sure it works for newsrooms, news publishers and news consumers - and we are focusing on a number of areas.
First, newsrooms are looking to engage audiences better, so we are expanding our News Lab Fellowships in Australia in partnership with the Australian Broadcasting Corporation. We help journalists use technology to tell more compelling stories, and offer them insights on how their work resonates with readers through Google Analytics.
Second, publishers are endeavouring to grow their business, so we are partnering with them to increase their revenue through digital advertising. We are also helping to promote subscriptions through the integration of subscription content in Search and offering a simple one-click sign-on called Subscribe with Google. With Flexible Sampling, publishers decide how much free sampling to offer their potential subscribers.
Third, consumers are seeking more news on digital platforms, so we are improving the way Search delivers them to the most relevant and trusted sources. In the past calendar year, we directed more than 2 billion visits to Australian news websites — each visit an opportunity to gain a loyal subscriber.
Finally, consumers are also seeking a better news consumption experience. People quit sites that take more than three seconds to load. So we are helping publishers create web pages that load in less than half a second with our open-source Accelerated Mobile Pages format.
And, at all times, people should have transparency and control over how their personal data is used. That’s a responsibility we take very seriously at Google, and we are encouraged that Australians are increasingly aware of how to access and change - even delete - the data they have shared with Google. In 2017, Australians visited myaccount.google.com more than 22 million times to understand what data they share with Google and how it is used to create a more relevant experience for them.
The way that people consume news may change, but the need for quality journalism does not. Our goal is to support newsrooms in meeting the evolving expectations of their audience. Ultimately, consumers will be the ones who decide whether news publishers flourish, but on present form there is every reason to believe that they will.

I’m Feeling Earthy: Earth Day trends and more

It’s Earth Day—take a walk with us.

First, let’s dig into issues taking root in Search. Ahead of Earth Day, “solar energy,” “drought” and “endangered species” climbed in popularity this week. Meanwhile, people are looking for ways their own actions can make a positive impact. The top “how to recycle” searches were for plastic, paper, batteries, plastic bags, and styrofoam. And around the world, trending queries about Earth Day were “how many trees will be saved by recycling?” and “which type of plastic is more friendly to the environment?”  

To explore some of the other searches that are blooming for Earth Day, take a look at our trends page.


In our corner of the world, Earth Day celebrations started on Google Earth’s first birthday (tweet at @googleearth with #ImFeelingEarthy and see where it takes you!). The party continues with a special tribute to Jane Goodall in today’s Doodle, and kids inspired by the Doodle can create their own Google logo, thanks to our partnership with World Wildlife Fund. And while we’re feeling extra Earthy this week, the environment is important to our work all year long—here’s what we’re doing for our operations, our surroundings, our customers, and our community.

Leveraging AI to protect our users and the web



Recent advances in AI are transforming how we combat fraud and abuse and implement new security protections. These advances are critical to meeting our users’ expectations and keeping increasingly sophisticated attackers at bay, but they come with brand new challenges as well.

This week at RSA, we explored the intersection between AI, anti-abuse, and security in two talks.

Our first talk provided a concise overview of how we apply AI to fraud and abuse problems. The talk started by detailing the fundamental reasons why AI is key to building defenses that keep up with user expectations and combat increasingly sophisticated attacks. It then delved into the top 10 anti-abuse specific challenges encountered while applying AI to abuse fighting and how to overcome them. Check out the infographic at the end of the post for a quick overview of the challenges we covered during the talk.

Our second talk looked at attacks on ML models themselves and the ongoing effort to develop new defenses.

It covered attackers’ attempts to recover private training data, to poison a model’s training set with examples that cause it to learn incorrect behaviors, to modify the input a model receives at classification time so that it makes a mistake, and more.

Our talk also looked at various defense solutions, including differential privacy, which provides a rigorous theoretical framework for preventing attackers from recovering private training data.
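
To make one of these defenses concrete, below is a minimal, illustrative sketch of the Laplace mechanism, one of the simplest building blocks of differential privacy. It is not how Google’s production systems implement these protections, and the query value, sensitivity and epsilon shown are assumptions chosen purely for the example.

import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    # Add Laplace noise with scale sensitivity/epsilon; a smaller epsilon
    # means more noise and a stronger privacy guarantee.
    rng = np.random.default_rng() if rng is None else rng
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a count over a training set. Counting queries
# have sensitivity 1, since adding or removing one record changes the count
# by at most 1. The numbers here are made up for illustration.
noisy_count = laplace_mechanism(true_value=1234, sensitivity=1.0, epsilon=0.5)
print(round(noisy_count))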

Hopefully you were able to join us at RSA! But if not, here are the re-recording and slides of our first talk on applying AI to abuse prevention, along with the slides from our second talk about protecting ML models.

Introducing the CVPR 2018 On-Device Visual Intelligence Challenge



Over the past year, there have been exciting innovations in the design of deep networks for vision applications on mobile devices, such as the MobileNet model family and integer quantization. Many of these innovations have been driven by performance metrics that focus on meaningful user experiences in real-world mobile applications, requiring inference to be both low-latency and accurate. While the accuracy of a deep network model can be conveniently estimated with well established benchmarks in the computer vision community, latency is surprisingly difficult to measure and no uniform metric has been established. This lack of measurement platforms and uniform metrics has hampered the development of performant mobile applications.

Today, we are happy to announce the On-device Visual Intelligence Challenge (OVIC), part of the Low-Power Image Recognition Challenge Workshop at the 2018 Computer Vision and Pattern Recognition conference (CVPR2018). A collaboration with Purdue University, the University of North Carolina and IEEE, OVIC is a public competition for real-time image classification that uses state-of-the-art Google technology to significantly lower the barrier to entry for mobile development. OVIC provides two key features to catalyze innovation: a unified latency metric and an evaluation platform.

A Unified Metric
OVIC focuses on the establishment of a unified metric aligned directly with accurate and performant operation on mobile devices. The metric is defined as the number of correct classifications within a specified per-image average time limit of 33ms. This latency limit allows every frame in a live 30 frames-per-second video to be processed, thus providing a seamless user experience1. Prior to OVIC, it was tricky to enforce such a limit due to the difficulty of accurately and uniformly measuring latency as it would be experienced in real-world applications on real-world devices. Without a repeatable mobile development platform, researchers have relied primarily on approximate metrics for latency that are convenient to compute, such as the number of multiply-accumulate operations (MACs). The intuition is that multiply-accumulate operations constitute the most time-consuming work in a deep neural network, so their count should be indicative of the overall latency. However, these metrics are often poor predictors of on-device latency, because many aspects of a model can affect the average latency of each MAC in typical implementations.
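
As a rough illustration of how such a metric could be scored, the sketch below counts correct classifications produced while a submission stays within the cumulative 33ms-per-image time budget. This is a hedged reading of the metric, not the official OVIC scoring code, and the exact handling of the budget in the competition may differ.

def ovic_style_score(results, budget_ms_per_image=33.0):
    # results: one (is_correct, latency_ms) pair per test image, in the
    # order the submission processed them.
    total_budget_ms = budget_ms_per_image * len(results)
    elapsed_ms = 0.0
    correct = 0
    for is_correct, latency_ms in results:
        elapsed_ms += latency_ms
        if elapsed_ms > total_budget_ms:
            break  # out of time; remaining images earn no credit
        correct += int(is_correct)
    return correct

# Three images averaging 30ms each, two classified correctly -> score of 2.
print(ovic_style_score([(True, 25.0), (False, 30.0), (True, 35.0)]))
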
Even though the number of multiply-accumulate operations (# MACs) is the most commonly used metric to approximate on-device latency, it is a poor predictor of latency. Using data from various quantized and floating point MobileNet V1 and V2 based models, this graph plots on-device latency on a common reference device versus the number of MACs. It is clear that models with similar latency can have very different MACs, and vice versa.
The graph above shows that while the number of MACs is correlated with inference latency, there is significant variation in the mapping. The number of MACs is therefore a poor proxy for latency, and since latency directly affects users’ experiences, we believe it is paramount to optimize latency directly rather than limiting the number of MACs as a proxy.

An Evaluation Platform
As mentioned above, a primary issue with latency is that it has previously been challenging to measure reliably and repeatably, due to variations in implementation, running environment and hardware architectures. Recent successes in mobile development overcome these challenges with the help of a convenient mobile development platform, including optimized kernels for mobile CPUs, light-weight portable model formats, increasingly capable mobile devices, and more. However, these various platforms have traditionally required resources and development capabilities that are only available to larger universities and industry.

With that in mind, we are releasing OVIC’s evaluation platform, which includes a number of components designed to make mobile development, and evaluations that can be replicated and compared, accessible to the broader research community:
  • TOCO compiler for optimizing TensorFlow models for efficient inference
  • TensorFlow Lite inference engine for mobile deployment
  • A benchmarking SDK that can be run locally on any Android phone
  • Sample models to showcase successful mobile architectures that run inference in floating-point and quantized modes
  • Google’s benchmarking tool for reliable latency measurements on specific Pixel phones (available to registered contestants).
Using the tools available in OVIC, a participant can conveniently incorporate measurement of on-device latency into their design loop without having to worry about optimizing kernels, purchasing latency/power measurement devices, or designing the framework to drive them. The only requirement for entry is experience with training computer vision models in TensorFlow, which can be gained from this tutorial.
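
For orientation, here is a hedged sketch of how a contestant might convert a trained model to TensorFlow Lite and take a rough latency measurement with the TFLite interpreter on a workstation. It uses the tf.lite Python API rather than the original TOCO command-line tool listed above, and the SavedModel path, input dtype and shape are assumptions; official on-device numbers come from the benchmarking tools described earlier.

import time
import numpy as np
import tensorflow as tf

# Convert a SavedModel (the path is a placeholder) into a TensorFlow Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model("./my_classifier_savedmodel")
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# Load the converted model and time a single inference on dummy float input.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
dummy_input = np.random.random_sample(tuple(input_detail["shape"])).astype(np.float32)

start = time.perf_counter()
interpreter.set_tensor(input_detail["index"], dummy_input)
interpreter.invoke()
latency_ms = (time.perf_counter() - start) * 1000.0
print(f"Single-image latency: {latency_ms:.1f} ms")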

With OVIC, we encourage the entire research community to improve the classification performance of low-latency high-accuracy models towards new frontiers, as shown in the following graphic.
Sampling of current MobileNet mobile models illustrating the tradeoff between increased accuracy and reduced latency.
We cordially invite you to participate here before the deadline on June 15th, and help us discover new mobile vision architectures that will propel development into the future.

Acknowledgements
We would like to acknowledge our core contributors Achille Brighton, Alec Go, Andrew Howard, Hartwig Adam, Mark Sandler and Xiao Zhang. We would also like to acknowledge our external collaborators Alex Berg and Yung-Hsiang Lu. We give special thanks to Andre Hentz, Andrew Selle, Benoit Jacob, Brad Krueger, Dmitry Kalenichenko, Megan Cummins, Pete Warden, Rajat Monga, Shiyu Hu and Yicheng Fan.


1 Alternatively the same metric could encourage even lower power operation by only processing a subset of the images in the input stream.



How Google autocomplete works in Search

Autocomplete is a feature within Google Search designed to make it faster to complete searches that you’re beginning to type. In this post—the second in a series that goes behind-the-scenes about Google Search—we’ll explore when, where and how autocomplete works.

Using autocomplete

Autocomplete is available most anywhere you find a Google search box, including the Google home page, the Google app for iOS and Android, the quick search box from within Android and the “Omnibox” address bar within Chrome. Just begin typing, and you’ll see predictions appear:

[Screenshot: autocomplete predictions appearing as you type “san f”]

In the example above, you can see that typing the letters “san f” brings up predictions such as “san francisco weather” or “san fernando mission,” making it easy to finish entering your search on these topics without typing all the letters.

Sometimes, we’ll also help you complete individual words and phrases, as you type:

[Screenshot: autocomplete completing an individual word as you type]

Autocomplete is especially useful for those using mobile devices, making it easy to complete a search on a small screen where typing can be hard. For both mobile and desktop users, it’s a huge time saver all around. How much? Well:

  • On average, it reduces typing by about 25 percent
  • Cumulatively, we estimate it saves over 200 years of typing time per day. Yes, per day!

Predictions, not suggestions

You’ll notice we call these autocomplete “predictions” rather than “suggestions,” and there’s a good reason for that. Autocomplete is designed to help people complete a search they were intending to do, not to suggest new types of searches to be performed. These are our best predictions of the query you were likely to continue entering.

How do we determine these predictions? We look at the real searches that happen on Google and show common and trending ones relevant to the characters that are entered and also related to your location and previous searches.
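
As a toy illustration of the general idea only (real autocomplete also weighs freshness, your location and your own past searches, and is far more sophisticated than this), a prefix-based completer over a table of query frequencies might look like the sketch below; the example queries and counts are made up.

def complete(prefix, query_counts, k=5):
    # Return the k most frequent known queries that start with the typed prefix.
    matches = [(query, count) for query, count in query_counts.items()
               if query.startswith(prefix)]
    matches.sort(key=lambda item: item[1], reverse=True)
    return [query for query, _ in matches[:k]]

# Made-up popularity counts, purely for illustration.
query_counts = {
    "san francisco weather": 9000,
    "san francisco giants": 7500,
    "san fernando mission": 4200,
}
print(complete("san f", query_counts))   # broad prefix: San Francisco queries rank first
print(complete("san fe", query_counts))  # narrower prefix: only San Fernando remains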

The predictions change in response to new characters being entered into the search box. For example, going from “san f” to “san fe” causes the San Francisco-related predictions shown above to disappear, with those relating to San Fernando then appearing at the top of the list:

[Screenshot: autocomplete predictions for “san fe”]

That makes sense. It becomes clear from the additional letter that someone isn’t doing a search that would relate to San Francisco, so the predictions change to something more relevant.

Why some predictions are removed

The predictions we show are common and trending ones related to what someone begins to type. However, Google removes predictions that are against our autocomplete policies, which bar:


  • Sexually explicit predictions that are not related to medical, scientific, or sex education topics
  • Hateful predictions against groups and individuals on the basis of race, religion or several other demographics
  • Violent predictions
  • Dangerous and harmful activity in predictions

In addition to these policies, we may remove predictions that we determine to be spam or that are closely associated with piracy, and we may remove predictions in response to valid legal requests.

A guiding principle here is that autocomplete should not shock users with unexpected or unwanted predictions.

This principle and our autocomplete policies are also why popular searches as measured in our Google Trends tool might not appear as predictions within autocomplete. Google Trends is designed as a way for anyone to deliberately research the popularity of search topics over time. Autocomplete removal policies are not used for Google Trends.

Why inappropriate predictions happen

We have systems in place designed to automatically catch inappropriate predictions and not show them. However, we process billions of searches per day, which in turn means we show many billions of predictions each day. Our systems aren’t perfect, and inappropriate predictions can get through. When we’re alerted to these, we strive to quickly remove them.

It’s worth noting that while some predictions may seem odd or shocking, or cause a “Who would search for that!” reaction, looking at the actual search results they generate sometimes provides needed context. As we explained earlier this year, the search results themselves may make it clearer in some cases that predictions don’t necessarily reflect awful opinions that some may hold, but instead may come from those seeking specific content that’s not problematic. It’s also important to note that predictions aren’t search results and don’t limit what you can search for.

Regardless, even if the context behind a prediction is benign, and even if a prediction is infrequent, it’s still an issue if the prediction is inappropriate. It’s our job to reduce these as much as possible.

Our latest efforts against inappropriate predictions

To better deal with inappropriate predictions, we launched a feedback tool last year and have been using the data since to make improvements to our systems. In the coming weeks, we’ll begin enforcing expanded removal criteria covering hateful and violent predictions.

Our existing policy protecting groups and individuals against hateful predictions only covers cases involving race, ethnic origin, religion, disability, gender, age, nationality, veteran status, sexual orientation or gender identity. Our expanded policy for search will cover any case where predictions are reasonably perceived as hateful or prejudiced toward individuals and groups, without being limited to particular demographics.

With the greater protections for individuals and groups, there may be exceptions where compelling public interest allows for a prediction to be retained. With groups, predictions might also be retained if there’s clear “attribution of source” indicated. For example, predictions for song lyrics or book titles that might be sensitive may appear, but only when combined with words like “lyrics” or “book” or other cues that indicate a specific work is being sought.

As for violence, our policy will expand to cover removal of predictions which seem to advocate, glorify or trivialize violence and atrocities, or which disparage victims.

How to report inappropriate predictions

Our expanded policies will roll out in the coming weeks. We hope that the new policies, along with other efforts with our systems, will improve autocomplete overall. But with billions of predictions happening each day, we know that we won’t catch everything that’s inappropriate.

Should you spot something, you can report using the “Report inappropriate predictions” link we launched last year, which appears below the search box on desktop:

[Screenshot: the “Report inappropriate predictions” link below the search box on desktop]

For those on mobile or using the Google app for Android, long press on a prediction to get a reporting option. Those using the Google app on iOS can swipe to the left to get the reporting option.

By the way, if we take action on a reported prediction that violates our policies, we don’t just remove that particular prediction. We expand our action to cover closely related predictions as well. Doing this work means an inappropriate prediction might not disappear immediately, but spending a little extra time lets us provide a broader solution.

Making predictions richer and more useful

As noted above, our predictions appear in search boxes ranging from desktop to mobile to the Google app. The appearance, the order and some of the predictions themselves can vary accordingly.

When you’re using Google on desktop, you’ll typically see up to 10 predictions. On a mobile device, you’ll typically see up to five, as there’s less screen space.

On mobile or Chrome on desktop, we may show you information like dates, the local weather, sports information and more below a prediction:

[Screenshot: a prediction shown with additional information, such as the local weather, beneath it]

In the Google app, you may also notice that some of the predictions have little logos or images next to them. That’s a sign that we have special Knowledge Graph information about that topic, structured information that’s often especially useful to mobile searchers:

[Screenshot: predictions with Knowledge Graph logos and images next to them]

Predictions also will vary because the list may include any related past searches you’ve done. We show these to help you quickly get back to a previous search you may have conducted:

[Screenshot: past searches appearing among the predictions]

You can tell if a past search is appearing because on desktop, you’ll see the word “Remove” appear next to a prediction. Click on that word if you want to delete the past search.

On mobile, you’ll see a clock icon on the left and an X button on the right. Tap the X to delete a past search. In the Google app, you’ll also see a clock icon. To remove a prediction, long press on it in Android or swipe left on iOS to reveal a delete option.

You can also delete all your past searches in bulk, or by particular dates or those matching particular terms using My Activity in your Google Account.

More about autocomplete

We hope this post has helped you understand more about autocomplete, including how we’re working to reduce inappropriate predictions and to increase the usefulness of the feature. For more, you can also see our help page about autocomplete.

You can also check out the recent Wired video interview below, where our vice president of search Ben Gomes and the product manager of autocomplete Chris Haire answer questions about autocomplete that came from…autocomplete!

Simplifying apps, desktops and devices with Citrix and Chrome Enterprise

As cloud adoption continues to accelerate, many organizations have found they need an ever-expanding fleet of mobile devices so that employees can work wherever and whenever they need. And research shows that when employees can work from anywhere, they can do more. According to Forbes, employee mobility leads to 30 percent better processes and 23 percent more productivity.

But as the demand for mobility grows, many organizations have also found themselves challenged by the need to provide secure mobile endpoints with access to certain legacy line-of-business or Windows apps. To help, last year we announced our partnership with Citrix to bring XenApp and XenDesktop to Chrome Enterprise.

Since bringing XenApp and XenDesktop to Chrome Enterprise, we’ve worked extensively with Citrix to help more businesses embrace the cloud. Last month, we announced that admins can now manage Chromebooks through several popular enterprise mobility management (EMM) tools, including Citrix XenMobile. And this year at HIMSS we showed how the combination of Citrix and HealthCast on Chrome Enterprise helps healthcare workers access electronic health records and virtualized apps securely on Chrome OS using their proximity badge.

All of this is the topic of an IDG webinar we’re co-sponsoring with Citrix. The webinar “Chrome OS & Citrix: Simplify endpoint management and VDI strategy” includes IDG CSO SVP/Publisher Bob Bragdon, Chrome Enterprise Group Product Manager Eve Phillips, and Citrix Chief Security Strategist Kurt Roemer as speakers, and addresses how Citrix and Chrome enable access to mission-critical business apps and create a productive workforce inside or outside corporate infrastructure.

Here’s what the webinar will cover:

  • How Chrome and Citrix can ensure secure access to critical enterprise apps.
  • How employees can be more productive through access to legacy apps in VDI. 
  • How Citrix XenApp (XA) and XenDesktop (XD) integrate with Chrome OS.
  • How Citrix’s upcoming product launches and enhancements with Chrome, GCP and G Suite can help enterprise IT teams and end users.

In March, Citrix’s Todd Terbeek shared his experiences transitioning to Chrome Enterprise, and this week Chief Security Strategist Kurt Roemer discussed how combining Citrix with Chrome can deliver expanded value across security, privacy and compliance. Our work with Citrix continues to evolve, and we’re looking forward to finding new ways to collaborate in the future.

To learn more, sign up for the webinar.

Source: Google Cloud


Kubernetes best practices: How and why to build small container images



Editor’s note: Today marks the first installment in a seven-part video and blog series from Google Developer Advocate Sandeep Dinesh on how to get the most out of your Kubernetes environment. Today he tackles the theory and practicalities of keeping your container images as small as possible.

Docker makes building containers a breeze. Just put a standard Dockerfile into your folder, run the docker ‘build’ command, and shazam! Your container image is built!

The downside of this simplicity is that it’s easy to build huge containers full of things you don’t need—including potential security holes.

In this episode of “Kubernetes Best Practices,” let’s explore how to create production-ready container images using Alpine Linux and the Docker builder pattern, and then run some benchmarks that can determine how these containers perform inside your Kubernetes cluster.

The process for creating container images is different depending on whether you are using an interpreted language or a compiled language. Let’s dive in!

Containerizing interpreted languages


Interpreted languages, such as Ruby, Python, Node.js, PHP and others, send source code through an interpreter that runs the code. This gives you the benefit of skipping the compilation step, but has the downside of requiring you to ship the interpreter along with the code.

Luckily, most of these languages offer pre-built Docker containers that include a lightweight environment that allows you to run much smaller containers.

Let’s take a Node.js application and containerize it. First, let’s use the “node:onbuild” Docker image as the base. The “onbuild” version of a Docker container pre-packages everything you need to run so you don’t need to perform a lot of configuration to get things working. This means the Dockerfile is very simple (only two lines!). But you pay the price in terms of disk size— almost 700MB!

FROM node:onbuild
EXPOSE 8080
By using a smaller base image such as Alpine, you can significantly cut down on the size of your container. Alpine Linux is a small and lightweight Linux distribution that is very popular with Docker users because it’s compatible with a lot of apps, while still keeping containers small.

Luckily, there is an official Alpine image for Node.js (as well as other popular languages) that has everything you need. Unlike the default “node” Docker image, “node:alpine” removes many files and programs, leaving only enough to run your app.

The Alpine Linux-based Dockerfile is a bit more complicated to create as you have to run a few commands that the onbuild image otherwise does for you.

FROM node:alpine
WORKDIR /app
# Copy package.json first so the "npm install" layer is cached
# when only the application code changes.
COPY package.json /app/package.json
RUN npm install --production
COPY server.js /app/server.js
EXPOSE 8080
CMD npm start
But, it’s worth it, because the resulting image is much smaller at only 65 MB!

Containerizing compiled languages


Compiled languages such as Go, C, C++, Rust, Haskell and others create binaries that can run without many external dependencies. This means you can build the binary ahead of time and ship it into production without having to ship the tools to create the binary such as the compiler.

With Docker’s support for multi-stage builds, you can easily ship just the binary and a minimal amount of scaffolding. Let’s learn how.

Let’s take a Go application and containerize it using this pattern. First, let’s use the “golang:onbuild” Docker image as the base. As before, the Dockerfile is only two lines, but again you pay the price in terms of disk size—over 700MB!

FROM golang:onbuild
EXPOSE 8080
The next step is to use a slimmer base image, in this case the “golang:alpine” image. So far, this is the same process we followed for an interpreted language.

Again, creating the Dockerfile with an Alpine base image is a bit more complicated as you have to run a few commands that the onbuild image did for you.

FROM golang:alpine
WORKDIR /app
ADD . /app
RUN cd /app && go build -o goapp
EXPOSE 8080
ENTRYPOINT ./goapp

But again, the resulting image is much smaller, weighing in at only 256MB!
However, we can make the image even smaller: You don’t need any of the compilers or other build and debug tools that Go comes with, so you can remove them from the final container.

Let’s use a multi-step build to take the binary created by the golang:alpine container and package it by itself.

# Stage 1: build the Go binary using the full golang:alpine toolchain.
FROM golang:alpine AS build-env
WORKDIR /app
ADD . /app
RUN cd /app && go build -o goapp

# Stage 2: copy only the compiled binary into a minimal Alpine image.
FROM alpine
# The bare Alpine base needs CA certificates installed for HTTPS to work.
RUN apk update && \
   apk add ca-certificates && \
   update-ca-certificates && \
   rm -rf /var/cache/apk/*
WORKDIR /app
COPY --from=build-env /app/goapp /app
EXPOSE 8080
ENTRYPOINT ./goapp

Would you look at that! This container is only 12MB in size!
While building this container, you may notice that the Dockerfile does strange things such as manually installing HTTPS certificates into the container. This is because the base Alpine Linux ships with almost nothing pre-installed. So even though you need to manually install any and all dependencies, the end result is super small containers!

Note: If you want to save even more space, you could statically compile your app and use the “scratch” container. Using “scratch” as a base container means you are literally starting from scratch with no base layer at all. However, I recommend using Alpine as your base image rather than “scratch” because the few extra MBs in the Alpine image make it much easier to use standard tools and install dependencies.

Where to build and store your containers


In order to build and store the images, I highly recommend the combination of Google Container Builder and Google Container Registry. Container Builder is very fast and automatically pushes images to Container Registry. Most developers should easily get everything done in the free tier, and Container Registry is the same price as raw Google Cloud Storage (cheap!).

Platforms like Google Kubernetes Engine can securely pull images from Google Container Registry without any additional configuration, making things easy for you!

In addition, Container Registry gives you vulnerability scanning tools and IAM support out of the box. These tools can make it easier for you to secure and lock down your containers.

Evaluating performance of smaller containers


People claim that small containers’ big advantage is reduced time—both time-to-build and time-to-pull. Let’s test this, using containers created with onbuild, and ones created with Alpine in a multistage process!

TL;DR: No significant difference for powerful computers or Container Builder, but significant difference for smaller computers and shared systems (like many CI/CD systems). Small images are always better in terms of absolute performance.

Building images on a large machine


For the first test, I am going to build using a pretty beefy laptop. I’m using our office WiFi, so the download speeds are pretty fast!


For each build, I remove all Docker images in my cache.
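
For readers who want to reproduce this kind of measurement, here is a rough sketch of a timing harness built on the standard Docker CLI. It is not the exact setup used for the numbers below, and the Dockerfile names and image tags are placeholders.

import subprocess
import time

def timed(cmd):
    # Run a command and return its wall-clock duration in seconds.
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

def clear_image_cache():
    # Remove all local images and build cache so every build starts cold.
    subprocess.run(["docker", "system", "prune", "--all", "--force"], check=True)

for tag, dockerfile in [("go-onbuild", "Dockerfile.onbuild"),
                        ("go-multistage", "Dockerfile.multistage")]:
    clear_image_cache()
    seconds = timed(["docker", "build", "-f", dockerfile, "-t", tag, "."])
    print(f"{tag}: built in {seconds:.0f}s")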

Build:
Go Onbuild: 35 Seconds
Go Multistage: 23 Seconds
The build takes about 12 seconds longer for the larger container. While this penalty is only paid on the initial build, your Continuous Integration system could pay this price with every build.

The next test is to push the containers to a remote registry. For this test, I used Container Registry to store the images.

Push:
Go Onbuild: 15 Seconds
Go Multistage: 14 Seconds
Well this was interesting! Why does it take the same amount of time to push a 12MB object and a 700MB object? Turns out that Container Registry uses a lot of tricks under the covers, including a global cache for many popular base images.

Finally, I want to test how long it takes to pull the image from the registry to my local machine.

Pull:
Go Onbuild: 26 Seconds
Go Multistage: 6 Seconds
At 20 seconds, this is the biggest difference between using the two different container images. You can start to see the advantage of using a smaller image, especially if you pull images often.

You can also build the containers in the cloud using Container Builder, which has the added benefit of automatically storing them in Container Registry.

Build + Push:
Go Onbuild: 25 Seconds
Go Multistage: 20 Seconds
So again, there is a small advantage to using the smaller image, but not as dramatic as I would have expected.

Building images on small machines


So is there an advantage for using smaller containers? If you have a powerful laptop with a fast internet connection and/or Container Builder, not really. However, the story changes if you’re using less powerful machines. To simulate this, I used a modest Google Compute Engine f1-micro VM to build, push and pull these images, and the results are staggering!

Pull:
Go Onbuild: 52 seconds
Go Multistage: 6 seconds
Build:
Go Onbuild: 54 seconds
Go Multistage: 28 seconds
Push:
Go Onbuild: 48 Seconds
Go Multistage: 16 seconds
In this case, using smaller containers really helps!

Pulling on Kubernetes


While you might not care about the time it takes to build and push the container, you should really care about the time it takes to pull the container. When it comes to Kubernetes, this is probably the most important metric for your production cluster.

For example, let’s say you have a three-node cluster, and one of the nodes crashes. If you are using a managed system like Kubernetes Engine, the system automatically spins up a new node to take its place.

However, this new node will be completely fresh, and will have to pull all your containers before it can start working. The longer it takes to pull the containers, the longer your cluster isn’t performing as well as it should!

This can occur when you increase your cluster size (for example, using Kubernetes Engine Autoscaling), or upgrade your nodes to a new version of Kubernetes (stay tuned for a future episode on this).

We can see that the pull performance of multiple containers from multiple deployments can really add up here, and using small containers can potentially shave minutes from your deployment times!

Security and vulnerabilities


Aside from performance, there are significant security benefits from using smaller containers. Small containers usually have a smaller attack surface as compared to containers that use large base images.

I built the Go “onbuild” and “multistage” containers a few months ago, so they probably contain some vulnerabilities that have since been discovered. Using Container Registry’s built-in Vulnerability Scanning, it’s easy to scan your containers for known vulnerabilities. Let’s see what we find.

Wow, that’s a big difference between the two! Only three “medium” vulnerabilities in the smaller container, compared with 16 critical and over 300 other vulnerabilities in the larger container.

Let’s drill down and see which issues the larger container has.

You can see that most of the issues have nothing to do with our app, but rather programs that we are not even using! Because the multistage image is using a much smaller base image, there are just fewer things that can be compromised.

Conclusion

The performance and security advantages of using small containers speak for themselves. Using a small base image and the “builder pattern” can make it easier to build small images, and there are many other techniques for individual stacks and programming languages to minimize container size as well. Whatever you do, you can be sure that your efforts to keep your containers small are well worth it!

Check in next week when we’ll talk about using Kubernetes namespaces to isolate clusters from one another. And don’t forget to subscribe to our YouTube channel and Twitter for the latest updates.

If you haven’t tried GCP and our various container services before, you can quickly get started with our $300 free credits.

Content API for Shopping Roundup

There have been some smaller API updates and announcements that we'd like to let you know about!

If you have any questions or feedback about these items or any other questions about the Content API for Shopping, please let us know on the forum.

Session length controls for domains using SAML

In March, we introduced a setting that allows G Suite Business, Enterprise, and Education admins to specify the duration of web sessions for Google services (e.g. four hours, seven days, or infinite). At the time, this setting only applied to domains where Google was responsible for the login (i.e. where Google was the Identity Provider). We’re now extending the reach of this setting and making it applicable in domains that federate to another Identity Provider (IdP) using SAML.


Note that these settings apply to all desktop web sessions, as well as some mobile browser sessions. Native mobile apps, like Gmail for Android and iOS, aren’t impacted by these settings.

Removing session-based cookies on May 7th, 2018

In the past, in order to give more control over session lengths to a G Suite customer’s preferred IdP, we set cookies for sessions created by federating to another IdP via SAML as transient, or session-based. These cookies were intended to expire whenever the browser was closed, meaning the user would be redirected to their primary IdP whenever they reopened the browser and visited a Google site.

Over time, however, this behavior has become increasingly inconsistent across browsers. We believe that G Suite admins are better served by explicit session length controls, like the ones we just launched. Unlike session cookies, these controls are respected regardless of the user’s browser.

With this in mind, we’ll be removing session-based cookies for G Suite customers who federate to another IdP via SAML on May 7th, 2018. Please consider setting a custom session length for your organization if your workflows depend on it.

Replicating previous behavior

If it’s critical to replicate the previous behavior, where all sessions expired when a browser was closed, you can change the browser settings on impacted machines to delete all Google cookies when the browser is exited. Instructions to configure this on Chrome can be found here. To deploy this policy on multiple machines, use Chrome policies to configure session-only cookies for [*.]google.com.
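
For example, on platforms that accept JSON policy files, a minimal managed policy using Chrome’s CookiesSessionOnlyForUrls setting might look like the snippet below; the file location and deployment mechanism vary by platform, so follow the linked instructions for your environment.

{
  "CookiesSessionOnlyForUrls": ["[*.]google.com"]
}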

Launch Details
Release track:
Launching to both Rapid Release and Scheduled Release

Editions:
Available to G Suite Business, Enterprise, and Education editions only

Rollout pace:
Gradual rollout (up to 15 days for feature visibility)

Impact:
Admins only

Action:
Admin action suggested/FYI

More Information
Help Center: Set up session length for Google services


The High Five: put your hands together for this week’s search trends

Every Friday, we look back at five trending topics in Search from that week, and then give ourselves a High Five for making it to the weekend. Today we’re putting our hands together for National High Five Day—so first, a few notable “high five” trends. Then on to our regularly scheduled programming.

High Fives all around
Turns out, searches for “high five” transcend all realms of culture: sports (“Why do NBA players high five after free throws?”), entertainment (“how to high five a Sim”), and pets (“How to teach a dog to high five”). As for virtual high fives, “Scrubs,” “Seinfeld” and Liz Lemon are high five famous—they’re the top trending “high five gifs.”

A First Lady, first a mother
When former First Lady Barbara Bush passed away on Tuesday at the age of 92, people remembered her role as matriarch, searching for “Barbara Bush children,” “Barbara Bush family,” and “Barbara Bush grandchildren.” She was the second woman to be both the mother and the wife of a president, and searches for the first woman to hold that title, Abigail Adams (wife of John and mother of John Quincy), went up by 1,150 percent this week.

What’s Swedish for robot?
Need an extra set of hands? A team of researchers built a robot to help with one of the most challenging tasks of the modern era—assembling Ikea furniture. In an ordinary week, people might search for Ikea lamp, but for now they’re more interested in “Ikea robot.” Though Swedish meatballs are always a favorite, this week’s trending Ikea furniture items were Ikea closets, plants and sofas.

Work it, Walmart
Walmart’s store aisles are turning into runways with the new employee dress code. Employees can now wear jeans and–brace yourselves–any solid color top. As for bottoms, people want to know, “Are leggings included in Walmart’s new dress code?” We never (Arkan)saw this coming, but Arkansas topped the list of regions searching for “Walmart dress code” in the U.S. For people wondering about other dress code etiquette, a trending question was “what to wear to jury duty.”

Kendrick makes history
This week people asked “Why is Kendrick Lamar important?” Listen to this: he made music history by becoming the first musician outside classical and jazz to win the Pulitzer Prize for Music (high five, Kendrick!). And people felt the pull to search for “Kendrick Lamar prize”—interest was 900 percent higher than for “Kendrick Lamar song.”

Source: Search