Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 102 (102.0.5005.58) for Android. It's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Erhu Akpobaro
Google Chrome

Augmented reality brings fine art to life for International Museum Day

Have you ever dreamt of having your portrait taken by a world-famous artist? Or wished a painting would come to life before your eyes? This International Museum Day, we’re unveiling three new Art Filter options via the Google Arts & Culture app so that you can immerse yourself in iconic paintings by Vincent van Gogh, Grant Wood, and Fernando Botero.

Our 3D-modeled augmented reality filter for Starry Night is a creative new twist on our previous Art Filter options and reflects how we continue to innovate with technology. Responding to the evocative atmosphere of Van Gogh’s masterpiece, it lets you set the night sky’s swirling winds and dazzling stars in motion. These filters are possible thanks to our partners in New York, Bogotá, and around the world who make their astonishing collections available online via Google Arts & Culture.

In another first for Art Filter, we’ve introduced face-mirroring effects to Grant Wood’s definitive depiction of midwestern America. See the figures of this celebrated double-portrait in a new light by interacting with both simultaneously. Perhaps you’ll put a smile on their famously long faces? Fernando Botero’s La primera dama, by contrast, needs no cheering up. This voluminous figure captures the Colombian artist’s inimitable Boterismo style in all its vibrancy and humor. Each of our three new Art Filter options draws inspiration from the paintings themselves to make these extraordinary artworks fun and educational for everyone.

Museums exist to preserve and celebrate art and culture. Using immersive, interactive technology, we aim to make these vital institutions more accessible. More than 60 museums from over 15 countries have joined Google Arts & Culture in 2022, adding their new collections and stories to those of more than 2,000 existing partners.

You can flick through the history of manga, tune into Bob Marley’s positive vibrations, tour an Argentinian palace, and hear powerful oral histories from Black Britain. In addition to the new Art Filter options, you can also explore space, air, and sea with Neil Armstrong’s space suit, Amelia Earhart’s Lockheed Vega 5B, or a deep-sea diving helmet.

The Google Arts & Culture app is available to download for Android or iOS. Tap the Camera icon to immerse yourself in Art Filter (g.co/artfilter), get creative with Art Transfer, find a pawfect match for your animal companion, and more. From the beauty of India’s celebrated crafts to terracotta toys for Greco-Roman children, we hope it will inspire you to explore and interact with incredible artifacts from around the globe and across history.

Beta Channel Update for Desktop

The Beta channel has been updated to 102.0.5005.61 for Windows, Mac and Linux.

A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Prudhvikumar Bommana
Google Chrome

AppSheet Enterprise Standard and Enterprise Plus available as add-ons to Google Workspace editions

Quick summary 

Google Workspace customers can now purchase AppSheet Enterprise Standard and Enterprise Plus as add-ons by contacting their Google Cloud sales representative or through the Google Workspace Partner network. 


AppSheet allows users to maximize Google Workspace by building custom applications on top of Google Workspace and other services in their environment, all without writing any code. With AppSheet Enterprise, customers can enable advanced scenarios with enhanced connectivity, greater scale, and strengthened governance.


Getting started



Availability

  • Available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Starter, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Plus, the Teaching and Learning Upgrade, Frontline, and Nonprofits, as well as legacy G Suite Basic and Business customers.

Resources

How to use App Engine Memcache in Flask apps (Module 12)

Posted by Wesley Chun

Background

In our ongoing Serverless Migration Station series aimed at helping developers modernize their serverless applications, one of the key objectives for Google App Engine developers is to upgrade to the latest language runtimes, such as from Python 2 to 3 or Java 8 to 17. Another objective is to help developers learn how to move away from App Engine legacy APIs (now called "bundled services") to Cloud standalone equivalent services. Once this has been accomplished, apps are much more portable and flexible in where and how they can run.

In today's Module 12 video, we're going to start our journey by implementing App Engine's Memcache bundled service, setting us up for our next move to a more complete in-cloud caching service, Cloud Memorystore. Most apps typically rely on some database, and in many situations, they can benefit from a caching layer to reduce the number of queries and improve response latency. In the video, we add use of Memcache to a Python 2 app that has already migrated web frameworks from webapp2 to Flask, providing greater portability and execution options. More importantly, it paves the way for an eventual 3.x upgrade because the Python 3 App Engine runtime does not support webapp2. We'll cover both the 3.x and Cloud Memorystore ports next in Module 13.

Got an older app needing an update? We can help with that.

Adding use of Memcache

The sample application registers individual web page "visits," storing visitor information such as the IP address and user agent. In the original app, these values are stored immediately, and then the most recent visits are queried to display in the browser. If the same user continuously refreshes their browser, each refresh constitutes a new visit. To discourage this type of abuse, we cache the same user's visit for an hour, returning the same cached list of most recent visits unless a new visitor arrives or an hour has elapsed since their initial visit.

Below is pseudocode representing the core part of the app that saves new visits and queries for the most recent visits. In the original version, each visit is registered immediately. After the update, the app first attempts to fetch these visits from the cache: if cached results are available and "fresh" (within the hour), they're used immediately; if the cache is empty or a new visitor arrives, the current visit is stored as before, and this latest collection of visits is cached for an hour. The new code's job is to manage that cached data.

Adding App Engine Memcache usage to sample app
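To make that concrete, here is a minimal sketch of what the updated handler can look like. It is not the exact Module 12 sample code; the Visit model and template name are stand-ins modeled on the Module 1 app.

```python
# Minimal sketch (not the exact Module 12 sample code): a Flask handler on the
# Python 2 App Engine runtime with a Memcache layer in front of Datastore.
from flask import Flask, request, render_template
from google.appengine.api import memcache
from google.appengine.ext import ndb

app = Flask(__name__)
HOUR = 3600  # cache lifetime in seconds

class Visit(ndb.Model):
    'Visit entity: visitor string plus auto-set timestamp'
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

def store_visit(remote_addr, user_agent):
    'register a new page visit in Datastore'
    Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()

def fetch_visits(limit):
    'query Datastore for the most recent visits'
    return Visit.query().order(-Visit.timestamp).fetch(limit)

@app.route('/')
def root():
    'main application (GET) handler'
    visitor = '{}: {}'.format(request.remote_addr, request.user_agent)
    visits = memcache.get('visits')               # try the cache first
    if not visits or visits[0].visitor != visitor:
        # cache miss, stale data, or a new visitor: store the visit,
        # re-query, and cache the latest results for an hour
        store_visit(request.remote_addr, request.user_agent)
        visits = fetch_visits(10)
        memcache.set('visits', visits, HOUR)
    return render_template('index.html', visits=visits)
```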

Wrap-up

Today's "migration" began with the Module 1 sample app. We added a Memcache-based caching layer and arrived at the finish line with the Module 12 sample app. To practice this on your own, follow the codelab, working through it by hand as you follow along with the video. The Module 12 app will then be ready to upgrade to Cloud Memorystore should you choose to do so.

In Fall 2021, the App Engine team extended support of many of the bundled services to next-generation runtimes, meaning you are no longer required to migrate to Cloud Memorystore when porting your app to Python 3. You can continue using Memcache in your Python 3 app so long as you retrofit the code to access bundled services from next-generation runtimes.

If you do want to move to Cloud Memorystore, stay tuned for the Module 13 video or try its codelab to get a sneak peek. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, we hope to one day cover other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.

Vector-Quantized Image Modeling with Improved VQGAN

In recent years, natural language processing models have dramatically improved their ability to learn general-purpose representations, which has resulted in significant performance gains for a wide range of natural language generation and natural language understanding tasks. In large part, this has been accomplished through pre-training language models on extensive unlabeled text corpora.

This pre-training formulation does not make assumptions about input signal modality, which can be language, vision, or audio, among others. Several recent papers have exploited this formulation to dramatically improve image generation results through pre-quantizing images into discrete integer codes (represented as natural numbers), and modeling them autoregressively (i.e., predicting sequences one token at a time). In these approaches, a convolutional neural network (CNN) is trained to encode an image into discrete tokens, each corresponding to a small patch of the image. A second stage CNN or Transformer is then trained to model the distribution of encoded latent variables. The second stage can also be applied to autoregressively generate an image after the training. But while such models have achieved strong performance for image generation, few studies have evaluated the learned representation for downstream discriminative tasks (such as image classification).

In “Vector-Quantized Image Modeling with Improved VQGAN”, we propose a two-stage model that reconceives traditional image quantization techniques to yield improved performance on image generation and image understanding tasks. In the first stage, an image quantization model, called VQGAN, encodes an image into lower-dimensional discrete latent codes. Then a Transformer model is trained to model the quantized latent codes of an image. This approach, which we call Vector-quantized Image Modeling (VIM), can be used for both image generation and unsupervised image representation learning. We describe multiple improvements to the image quantizer and show that training a stronger image quantizer is a key component for improving both image generation and image understanding.

Vector-Quantized Image Modeling with ViT-VQGAN
One recent, commonly used model that quantizes images into integer tokens is the Vector-quantized Variational AutoEncoder (VQVAE), a CNN-based auto-encoder whose latent space is a matrix of discrete learnable variables, trained end-to-end. VQGAN is an improved version of this that introduces an adversarial loss to promote high quality reconstruction. VQGAN uses transformer-like elements in the form of non-local attention blocks, which allows it to capture distant interactions using fewer layers.

In our work, we propose taking this approach one step further by replacing both the CNN encoder and decoder with ViT. In addition, we introduce a linear projection from the output of the encoder to a low-dimensional latent variable space for lookup of the integer tokens. Specifically, we reduced the encoder output from a 768-dimension vector to a 32- or 8-dimension vector per code, which we found encourages the decoder to better utilize the token outputs, improving model capacity and efficiency.
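As a rough illustration (not the paper's implementation), the factorized lookup amounts to projecting each encoder output into the low-dimensional code space and assigning it the nearest codebook entry. The codebook size below is illustrative.

```python
# Minimal sketch of the factorized code lookup: project ViT encoder outputs
# down to the code dimension, then assign each patch its nearest codebook entry.
import numpy as np

num_codes, enc_dim, code_dim = 8192, 768, 32      # illustrative sizes
codebook = np.random.randn(num_codes, code_dim)   # learned in practice
proj = np.random.randn(enc_dim, code_dim) / np.sqrt(enc_dim)  # linear projection

def quantize(encoder_outputs):
    """encoder_outputs: (num_patches, enc_dim), one row per image patch.
    Returns the integer token ID of the nearest code for each patch."""
    z = encoder_outputs @ proj                    # (num_patches, code_dim)
    # squared Euclidean distance from each projected vector to every code
    dists = (z ** 2).sum(-1, keepdims=True) - 2 * z @ codebook.T + (codebook ** 2).sum(-1)
    return dists.argmin(axis=-1)                  # discrete integer tokens

tokens = quantize(np.random.randn(32 * 32, enc_dim))  # 256x256 image -> 1024 tokens
```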

Overview of the proposed ViT-VQGAN (left) and VIM (right), which, working together, are capable of both image generation and image understanding. In the first stage, ViT-VQGAN converts images into discrete integers, which the autoregressive Transformer (Stage 2) then learns to model. Finally, the Stage 1 decoder is applied to these tokens to enable generation of high quality images from scratch.

With our trained ViT-VQGAN, images are encoded into discrete tokens represented by integers, each of which encompasses an 8x8 patch of the input image. Using these tokens, we train a decoder-only Transformer to predict a sequence of image tokens autoregressively. This two-stage model, VIM, is able to perform unconditioned image generation by simply sampling token-by-token from the output softmax distribution of the Transformer model.

VIM is also capable of performing class-conditioned generation, such as synthesizing a specific image of a given class (e.g., a dog or a cat). We extend the unconditional generation to class-conditioned generation by prepending a class-ID token before the image tokens during both training and sampling.
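Here is a minimal sketch of that sampling loop. The Transformer itself is replaced by a placeholder that returns random logits, and the codebook size and class index are illustrative.

```python
# Minimal sketch: class-conditioned, token-by-token sampling from a trained
# Stage 2 Transformer. `next_token_logits` is a stand-in for the real model.
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def next_token_logits(sequence):
    """Placeholder for the autoregressive Transformer: given the tokens so far
    (class-ID token followed by image tokens), return logits over the codebook."""
    return np.random.randn(8192)                # illustrative codebook size

def sample_image_tokens(class_id, num_tokens=32 * 32, rng=np.random.default_rng()):
    sequence = [class_id]                       # prepend the class-ID token
    for _ in range(num_tokens):
        probs = softmax(next_token_logits(sequence))
        sequence.append(int(rng.choice(len(probs), p=probs)))
    return sequence[1:]                         # image tokens for the Stage 1 decoder

tokens = sample_image_tokens(class_id=207)      # e.g., an ImageNet class index
```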

Uncurated set of dog samples from class-conditioned image generation trained on ImageNet. Conditioned classes: Irish terrier, Norfolk terrier, Norwich terrier, Yorkshire terrier, wire-haired fox terrier, Lakeland terrier.

To test the image understanding capabilities of VIM, we also fine-tune a linear projection layer to perform ImageNet classification, a standard benchmark for measuring image understanding abilities. Similar to ImageGPT, we take a layer output at a specific block, average over the sequence of token features (frozen) and insert a softmax layer (learnable) projecting averaged features to class logits. This allows us to capture intermediate features that provide more information useful for representation learning.
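For illustration, here is a minimal sketch of that probing setup, with hypothetical feature dimensions and an untrained linear layer standing in for the learnable projection.

```python
# Minimal sketch: linear probing on top of frozen Transformer features.
# `frozen_features` stands in for the token features from a chosen block.
import numpy as np

num_classes, feat_dim = 1000, 1024              # illustrative sizes
W = np.zeros((feat_dim, num_classes))           # the only learnable parameters
b = np.zeros(num_classes)                       # (trained with softmax cross-entropy)

def probe_logits(frozen_features):
    """frozen_features: (num_tokens, feat_dim) output of one Transformer block.
    Average over the token sequence, then project to class logits."""
    pooled = frozen_features.mean(axis=0)       # average token features (frozen)
    return pooled @ W + b                       # learnable linear/softmax layer

logits = probe_logits(np.random.randn(32 * 32, feat_dim))
```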

Experimental Results
We train all ViT-VQGAN models with a training batch size of 256 distributed across 128 Cloud TPU v4 cores. All models are trained with an input image resolution of 256x256. On top of the pre-learned ViT-VQGAN image quantizer, we train Transformer models for unconditional and class-conditioned image synthesis and compare with previous work.

We measure the performance of our proposed methods for class-conditioned image synthesis and unsupervised representation learning on the widely used ImageNet benchmark. In the table below we demonstrate the class-conditioned image synthesis performance measured by the Fréchet Inception Distance (FID). Compared to prior work, VIM improves the FID to 3.07 (lower is better), a relative improvement of 58.6% over the VQGAN model (FID 7.35). VIM also improves the capacity for image understanding, as indicated by the Inception Score (IS), which goes from 188.6 to 227.4, a 20.6% improvement relative to VQGAN.

Model                Acceptance Rate    FID      IS
Validation data      1.0                1.62     235.0
DCTransformer        1.0                36.5     N/A
BigGAN               1.0                7.53     168.6
BigGAN-deep          1.0                6.84     203.6
IDDPM                1.0                12.3     N/A
ADM-G, 1.0 guid.     1.0                4.59     186.7
VQVAE-2              1.0                ~31      ~45
VQGAN                1.0                17.04    70.6
VQGAN                0.5                10.26    125.5
VQGAN                0.25               7.35     188.6
ViT-VQGAN (Ours)     1.0                4.17     175.1
ViT-VQGAN (Ours)     0.5                3.04     227.4
Fréchet Inception Distance (FID) comparison between different models for class-conditional image synthesis and Inception Score (IS) for image understanding, both on ImageNet with resolution 256x256. The acceptance rate shows results filtered by a ResNet-101 classification model, similar to the process in VQGAN.

After training a generative model, we test the learned image representations by fine-tuning a linear layer to perform ImageNet classification, a standard benchmark for measuring image understanding abilities. Our model outperforms previous generative models on the image understanding task, improving classification accuracy through linear probing (i.e., training a single linear classification layer, while keeping the rest of the model frozen) from 60.3% (iGPT-L) to 73.2%. These results showcase VIM’s strong generation results as well as image representation learning abilities.

Conclusion
We propose Vector-quantized Image Modeling (VIM), which pretrains a Transformer to predict image tokens autoregressively, where discrete image tokens are produced from improved ViT-VQGAN image quantizers. With our proposed improvements on image quantization, we demonstrate superior results on both image generation and understanding. We hope our results can inspire future work towards more unified approaches for image generation and understanding.

Acknowledgements
We would like to thank Xin Li, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, and Yonghui Wu for the preparation of the VIM paper. We thank Wei Han, Yuan Cao, Jiquan Ngiam, Vijay Vasudevan, Zhifeng Chen and Claire Cui for helpful discussions and feedback, and others on the Google Research and Brain Team for support throughout this project.

Source: Google AI Blog


Export log data in near-real time to BigQuery

Quick summary 

Currently, you can export Google Workspace logs to Google BigQuery for customized and scalable reporting. Exports take place as a daily sync, returning log data that can be up to three days old. With this launch, exported log data will stream in near-real time (under 10 minutes), ensuring fresh data for your export. This helps you stay on top of security threats and analysis with the most up-to-date activity log data. 
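As an example of what you can do with the fresher data, the sketch below queries the last hour of exported activity logs with the BigQuery Python client. The project, dataset, table, and column names are placeholders and assumptions; substitute whatever your log export is configured to use.

```python
# Minimal sketch: pull the last hour of exported Workspace activity log data.
# Dataset/table/column names are assumptions based on a typical log export setup.
from google.cloud import bigquery

client = bigquery.Client(project='my-project')   # placeholder project ID

query = """
    SELECT time_usec, email
    FROM `my-project.workspace_logs.activity`    -- placeholder dataset.table
    WHERE time_usec > UNIX_MICROS(TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR))
    ORDER BY time_usec DESC
    LIMIT 100
"""

for row in client.query(query).result():
    print(row.time_usec, row.email)
```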



Stream activity log data in near-real time when using BigQuery export




Getting started 

  • Admins: This feature works automatically if you have set up service log exports to BigQuery. There is no additional admin control for this feature. 
  • End users: There is no end user impact. 

Rollout pace 


Availability 

  • Available to Google Workspace Enterprise Standard, Enterprise Plus, Education Standard, and Education Plus 
  • Not available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Education Fundamentals, Frontline, and Nonprofits, as well as legacy G Suite Basic and Business customers 
  • Not available to users with personal Google Accounts 

Resources 

New and updated third-party DevOps integrations for Google Chat, including PagerDuty

What’s changing

We’re introducing and updating a variety of additional DevOps integrations, which will allow you to take action on common workflows directly in Google Chat: 

  • Apps such as Google Cloud Build, Asana, GitHub, Jenkins, and more have been updated with new functionality: 
    • Slash commands for quick actions, such as creating a new Asana task or triggering a build in Jenkins or Google Cloud Build (see the sketch after this list). 
    • Dialogs for important flows, such as setting up the app or entering detailed information when creating a GitHub issue. 
  • Operations and incident response professionals can use the new PagerDuty integration to take action on PagerDuty incidents from Chat. From Chat, you’ll be able to: 
    • Receive notifications of PagerDuty incidents right in Google Chat. 
    • Take action, including acknowledging and resolving incidents, without leaving the conversation. 
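As a rough, hypothetical sketch of how a Chat app generally handles a slash command (this is not the code behind any of the integrations above; the endpoint and command ID are made up):

```python
# Minimal, hypothetical sketch: a Google Chat app webhook (Flask) reacting to a
# slash command. The command ID and task-creation step are placeholders.
from flask import Flask, request, jsonify

app = Flask(__name__)
CREATE_TASK_COMMAND_ID = '1'   # hypothetical ID configured for the Chat app

@app.route('/', methods=['POST'])
def on_event():
    """Handle an event POSTed by Google Chat."""
    event = request.get_json()
    if event.get('type') == 'MESSAGE':
        slash = (event.get('message') or {}).get('slashCommand')
        if slash and str(slash.get('commandId')) == CREATE_TASK_COMMAND_ID:
            # A real integration would call the third-party API here
            # (e.g., create the task) before replying in the conversation.
            return jsonify({'text': 'Task created (sketch only).'})
    return jsonify({'text': 'Event received.'})
```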



You can find these integrations and a complete list of other Google-developed Chat apps here.


Who’s impacted 

Admins and end users 


Why you’d use them 

We hope these additional third-party integrations within Chat help you collaborate and get work done faster by eliminating the need to switch between various apps and browser tabs. 


Additional details 

We plan to introduce the ability to create dedicated spaces to collaborate with teammates on important incidents to resolve them quickly, with the right people. We will provide an update on the Workspace Updates blog when that functionality becomes available. 


Getting started 


Rollout pace 


Availability 

  • Available to Google Workspace customers, as well as legacy G Suite Basic and Business customers 

Resources 

Privileged pod escalations in Kubernetes and GKE



At the KubeCon EU 2022 conference in Valencia, security researchers from Palo Alto Networks presented research findings on “trampoline pods”—pods with an elevated set of privileges required to do their job, but that could conceivably be used as a jumping off point to gain escalated privileges.

The research mentions GKE, including how developers should look at the privileged pod problem today, what the GKE team is doing to minimize the use of privileged pods, and actions GKE users can take to protect their clusters.

Privileged pods within the context of GKE security

While privileged pods can pose a security issue, it’s important to look at them within the overall context of GKE security. To use a privileged pod as a “trampoline” in GKE, there is a major prerequisite – the attacker has to first execute a successful application compromise and container breakout attack.

Because the use of privileged pods in an attack requires a first step such as a container breakout to be effective, let’s look at two areas:
  1. Features of GKE you can use to reduce the likelihood of a container breakout
  2. Steps the GKE team is taking to minimize the use of privileged pods and the privileges needed in them

Reducing container breakouts

There are a number of features in GKE, along with some best practices, that you can use to reduce the likelihood of a container breakout.

More information can be found in the GKE Hardening Guide.

How GKE is reducing the use of privileged pods

While it’s not uncommon for customers to install privileged pods into their clusters, GKE works to minimize the privilege levels held by our system components, especially those that are enabled by default. However, there are limits as to how many privileges can be removed from certain features. For example, Anthos Config Management requires permissions to modify most Kubernetes objects to be able to create and manage those objects.

Some other privileges are baked into the system, such as those held by Kubelet. Previously, we worked with the Kubernetes community to build the Node Restriction and Node Authorizer features to limit Kubelet's access to highly sensitive objects, such as secrets, adding protection against an attacker with access to the Kubelet credentials.

More recently, we have taken steps to reduce the number of privileged pods across GKE and have added additional documentation on privileges used in system pods as well as information on how to improve pod isolation. Below are the steps we’ve taken:
  1. We have added an admission controller to GKE Autopilot and GKE Standard (on by default) and GKE/Anthos (opt-in) that stops attempts to run as a more privileged service account, which blocks a method of escalating privileges using privileged pods.
  2. We created a permission scanning tool that identifies pods that have privileges that could be used for escalation, and we used that tool to perform an audit across GKE and Anthos.
  3. The permission scanning tool is now integrated into our standard code review and testing processes to reduce the risk of introducing privileged pods into the system. As mentioned earlier, some features require privileges to perform their function.
  4. We are using the audit results to reduce permissions available to pods. For example, we removed “update nodes and pods” permissions from anetd in GKE.
  5. Where privileged pods are required for the operation of a feature, we’ve added additional documentation to illustrate that fact.
  6. We added documentation that outlines how to isolate GKE-managed workloads in dedicated node pools when you’re unable to use GKE Sandbox to reduce the risk of privilege escalation attacks.
In addition to the measures above, we recommend users take advantage of tools that can scan RBAC settings to detect overprivileged pods used in their applications. As part of their presentation, the Palo Alto researchers announced an open source tool, called rbac-police, that can be used for the task. So, while it only takes a single overprivileged workload to act as a trampoline into the cluster, there are a number of actions you can take to minimize the likelihood of the prerequisite container breakout and the number of privileges used by a pod.
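Separately from rbac-police, here is a minimal sketch of one simple spot check using the Kubernetes Python client: listing containers that run with privileged: true, one signal of a potential trampoline pod worth reviewing.

```python
# Minimal sketch (not rbac-police): use the Kubernetes Python client to list
# containers running with `privileged: true` across all namespaces.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() in-cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    for container in pod.spec.containers or []:
        sc = container.security_context
        if sc and sc.privileged:
            print('{}/{} (container: {})'.format(
                pod.metadata.namespace, pod.metadata.name, container.name))
```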

Fostering inclusive spaces through Disability Alliance

I was 2 when my parents discovered I had polio, which impacted my ability to stand and walk. Growing up in China, I still remember the challenges I faced when I wanted to go to college. Back then, all potential candidates had to pass a physical test, which posed a challenge. Knowing this, my parents, my teachers and even the local government advocated for me. Thanks to their support, I was granted an exception to attend college, where I graduated with a degree in computer science.

When I joined Google in Shanghai in 2011, the real estate team was working to open a new office space. I was part of the planning process to ensure we designed an inclusive workspace, especially for individuals with physical disabilities. When I discovered the desks at the office were too high, or if the meeting space was not designed wide enough for someone in a wheelchair to enter, I worked with the team to solve the problem. I also suggested building wheelchair-accessible restrooms when they were not available on the floor I was working on.

These experiences showed me everyone has the voice to drive change — including myself. I decided to co-lead our Disability Alliance (DA), one of Google’s resource groups in China, with other passionate Googlers. We wanted to create a space to help address challenges Googlers with disabilities face, and build allyship among the wider Google community. We also wanted to create a platform to increase awareness of different forms of disabilities. For example, some people don't think about invisible disabilities, but it's equally important to build awareness of disabilities you might not immediately see. I'm incredibly excited to see how we continue to grow our community in the coming year across China.

Having a disability doesn't limit me, and I've been fortunate to be surrounded by people who value my abilities instead of my disability. Over the years, I've achieved my goals and dreams, from leading an incredible team of 50 at Google to taking on physical activities such as skiing and marathons to driving change for the broader disability community.


I was ready to compete in a marathon in China back in 2021

As we commemorate Global Accessibility Awareness Day, I also spoke to Sakiko, a fellow member of our Disability Alliance chapter in Japan, to learn more about what drives her, and why it’s important that we provide equal opportunities for all.


Sharing my personal experience at an external event. I’m seated on the far right in a gray sweater.

Tell us more about yourself. What keeps you going at Google after more than nine years?

I was born with spina bifida, and I move around with crutches. I’ve always wanted to work in sales, but when I was job hunting, I was turned down by several companies because of my disability. I knew I had the ability and knowledge to sell, and I enjoy interacting with people, so I didn’t give up. When I interviewed at Google, the interviewers focused on my potential and abilities, and not my disability. That surprised me, because I’d never experienced that before. I recall asking one of my interviewers if my disability would impede this opportunity, but he said, “If you have the ability to sell, it shouldn’t stop you from doing that.” It was amazing and encouraging to hear. I currently work on the Google Ads team and have experienced various roles. When my clients share how grateful they are for my dedicated support, that really keeps me going.

What is a memorable experience you’ve had with the Disability Alliance?

I once hosted a workshop where we invited students with disabilities to have hands-on experience coding their own web application, giving them the confidence to pursue their interest in engineering. At the end of the event, several parents shared that they didn’t know their children had the potential to code and create applications all by themselves. I still remember this day vividly, because it demonstrates everyone has the chance to shine when they are given the right opportunities to learn and develop new skills.