Beta Channel Update for Desktop

The Beta channel has been updated to 92.0.4515.107 for Windows, Linux and Mac.


A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Srinivas Sista

Hangouts to Google Chat upgrade beginning August 16th, with option to opt out

What’s changing 

Beginning August 16, 2021, we will start upgrading users who have the “Chat and classic Hangouts” setting selected to “Chat preferred,” unless you explicitly opt out. Users who already have “Chat only”, “Chat preferred”, or “Classic Only” selected, or users with both services turned off will not be affected. 


Additionally, the “Chat and classic Hangouts” setting will be removed for all users in your domain unless you opt out of the upgrade. 


If there are affected users in your domain, you will receive an email notification that contains more information and any necessary action that needs to be taken.


Who’s impacted

Admins and end users


Why it’s important

Unless you opt out, Google Chat will replace classic Hangouts as the default chat application for your affected users.


Beginning late 2021, classic Hangouts will no longer be supported and all remaining users will be migrated to Google Chat. Learn more about the Google Chat upgrade timeline.



Getting started


If you don’t take any action, all users in your organization who have the “Chat and classic Hangouts” setting selected will be automatically upgraded to “Chat preferred” and the “Chat and classic Hangouts” setting will no longer be accessible. We anticipate this migration will take around two weeks.


No action is required if there are no users in your domain with the “Chat and classic Hangouts” service setting selected.


Additional details

The “Chat and classic Hangouts” setting will be removed from the Admin Console regardless of your selected setting, unless you opt out.  


Conversation History: With the exception of a few special cases, messages sent in classic Hangouts 1:1 and group conversations will be available in Chat. Learn more about Chat and classic Hangouts interoperability.


Direct calling: Chat doesn’t yet support direct calling in the same way as classic Hangouts. We anticipate that direct calling will become available for Google Chat later this year, and we will provide an update here when that feature becomes available. 


Updates to Google Workspace Public Status Dashboard and service status alerts

What’s changing 

We're introducing a new Public Status Dashboard experience for Google Workspace. As part of this update, we’re enhancing the functionality of the existing Apps outage alert system-defined rule, which provides email notifications regarding service disruptions or outages via the Public Status Dashboard. Specifically, you can now configure the rule to also deliver Apps outage alerts to the Alert Center, and you can retrieve the alerts using the Alert Center API.


Who’s impacted 

Admins 


Why it’s important 

New Public Status Dashboard Experience 
Following the Google Maps Platform, the Google Workspace Status Dashboard will soon have a refreshed user interface, which will allow you to find and view important service status information faster. The location of the Public Status Dashboard will not change with this update, and it will continue to support RSS feed subscribers. 

The Google Workspace Status Dashboard will receive a new UI refresh, making it easier to view important information and updates.



Enhanced Apps outage alerts 
By bringing Apps outage alerts to the Alert Center, we are aligning with other Google Workspace alert types. The Apps outage alerts will share a familiar format with other alerts your organization may receive in the Alert Center. 

In addition to the Google Workspace Status Dashboard, you will be able to find Apps outage alerts in the Alert Center.



Additionally, we’ve updated the email notification format to contain structured information such as key issue details, the status of the affected services, and a link to the Google Workspace Status Dashboard.

We've enhanced the email notification for Apps outages to include richer information, the status of the outage, and quick links to more information.



Finally, the Apps outage alerts are now available via the Alert Center API and can be identified with the "AppsOutage" alert type. This will allow integration with your existing alerting or ticketing systems within your organization. 
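To give a sense of what that integration could look like, here is a minimal Python sketch that filters a list of Alert Center alerts down to Apps outages. The payload field names used here ("type", "data", "products", "alertId") are illustrative assumptions for this sketch; consult the Alert Center API reference for the actual alert schema.

```python
# Hypothetical sketch: picking Apps outage alerts out of an Alert
# Center listing. Field names are illustrative, not the documented
# schema.

def apps_outage_summaries(alerts):
    """Return (alert id, affected products) pairs for "AppsOutage" alerts."""
    summaries = []
    for alert in alerts:
        if alert.get("type") != "AppsOutage":
            continue
        data = alert.get("data", {})
        summaries.append((alert.get("alertId"), data.get("products", [])))
    return summaries

sample = [
    {"alertId": "a1", "type": "AppsOutage",
     "data": {"products": ["Gmail"], "status": "ONGOING"}},
    {"alertId": "a2", "type": "Suspicious login", "data": {}},
]
print(apps_outage_summaries(sample))  # [('a1', ['Gmail'])]
```

A real integration would fetch the alerts via the Alert Center API (filtering on the "AppsOutage" type as described above) before handing them to a routine like this, and then forward the summaries to an existing alerting or ticketing system.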


Email Notification Sender Changes 
To align with other email notifications from the Alert Center, the sender email address used for Apps outage alert email notifications is changing from [email protected] to [email protected].


The subject of these emails will not change (it will still be "Google Workspace status alert"). Any email routing or filtering based on the old sender address should be updated accordingly. 


Getting started 


Rollout pace 

  • Rapid Release and Scheduled Release domains: Full rollout (1-3 days for feature visibility) starting on July 19, 2021. 
  • We anticipate the updated Google Workspace Status Dashboard to become available by July 21, 2021. 

Availability 

  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers. 

Resources 

Explore the undeciphered writing of the Incas

Isaac Newton once said, "If I have seen further it is by standing on the shoulders of giants." By adopting this age-old phrase, he acknowledged that all “new” discoveries depend on all that preceded them.

At Google, we firmly believe that history has much to teach us. For me personally, as a Latin American, I have no doubt that the native peoples who inhabited our beautiful, diverse and inspiring region left us countless treasures — many of which still patiently wait to be discovered.


The MALI Collection on Google Arts & Culture

That is why I am so pleased and proud to present the new online exhibition The Khipu Keepers on Google Arts & Culture.

“Khipus,” which means “knots” in the Quechua language, are the colorful, intricate cords made by the Incas, who inhabited some parts of South America before the Spanish colonization of the Americas. These knotted strings are still an enigma waiting to be unraveled. What secrets are hidden in these colorful knots dating back centuries? What messages from the Incas echo in these intricate cords? Could the ancestral knowledge they hold inform us about our future?

Currently, there are about 1,400 surviving khipus in private collections and museums around the world. While approximately 85% of these contain knots representing numbers, the remaining 15% are believed to be an ancient form of writing without written words on paper or stone. Researchers are still working to decipher the meanings of these coded messages.

With the exhibition launching online today, the Lima Art Museum (MALI) and Google Arts & Culture are opening a window into one of the greatest mysteries the Inca people left behind.

By putting the centuries-old khipus on display online for the first time, this exhibition will let people from across the world engage with the fantastic legacy of the Inca civilization. Yet even more importantly, by creating a digital record of these enigmatic treasures that still have stories to tell, we are also preserving them forever. In this sense, The Khipu Keepers is also a first step of a promising journey for researchers to find new opportunities thanks to the power of technologies such as digitization. 

Trace the history of khipus back to Latin America’s first empire in the words of anthropologist Dr. Sabine Hyland, and listen to St. Andrews researcher Manny Medrano as he answers the most pressing questions about what we know of khipus. Watch an intro to the basic components of a khipu and what experts have discovered so far, or explore the Attendance Board that provides a rare connection between words and cords. Zoom into a large double khipu and learn about what it takes to conserve the khipus from the Temple Radicati collection.


Seven interesting facts about the enigmatic khipus

  1. The Quechua word “khipu” means knot.
  2. The pre-Columbian khipus were made of camelid hair or cotton fiber.
  3. The Incas used three types of knots: single, long and figure-eight.
  4. The colors of the khipu cords have different meanings.
  5. The distance between the knots also has a meaning and conveys a message.
  6. A cord without knots represents the number zero.
  7. Of all the known khipus, 85% convey numerical values and the remaining 15% are believed to tell stories.


From Latin America to the world

As seen in other Google Arts & Culture projects like Woolaroo and Fabricius, technology can be a powerful tool in the hands of researchers to preserve, research and understand the legacy of the ancient cultures and communities who came before us.

For “The Khipu Keepers,” researchers are once again the ones entrusted with “untangling” this chapter of our past and providing us with answers. They now know that they are not alone in this endeavor and that Google technologies can help them delve deeper into elements of history.

Give it up for the woman who helps Googlers give back

Over the past month, Googlers around the world have virtually volunteered in their communities — from mentoring students to reviewing resumes for job seekers. It’s all a part of GoogleServe, our month-long campaign that encourages Googlers to lend their time and expertise to others. GoogleServe is just one of many opportunities employees have to give back, and one of the projects that Megan Colla Wheeler is responsible for running. 

As the lead for Google.org’s global employee giving and volunteering campaigns, Megan’s role is to create and run programs like GoogleServe and connect the nearly 150,000 Googlers around the world to them. Ultimately, her job is to help Googlers dedicate their time, money or expertise to their communities. How’s that for paying it forward?

With more than ten years of experience at Google, we wanted to hear more about how she ended up in this job, her advice to others and all the ways volunteering at Google has changed — particularly this past year. 


How do you explain your job to friends?

My goal is to create meaningful ways for Googlers to contribute to their communities — by offering their time, expertise or money — and help connect them to those opportunities. 


When did you realize you were interested in philanthropy and volunteering?

I was a Kinesiology major in college. Toward the end of my sophomore year, I took a course on social justice and it struck a chord in me. Though I loved sports, I realized I wanted my career to be about something bigger, something meaningful. I wanted to lend my skills for good. So even though I graduated with a kinesiology major, I focused my job search on the nonprofit sector and got a job working for a nonprofit legal organization.


How did you go from there to leading volunteer programs for Google.org?

I never knew that the job I have now was even possible. I left my nonprofit job to become a recruiting coordinator at Google. My plan was to do it for a year, diversify my skills, then go back to the nonprofit world. 

I remember going to my first GoogleServe event. We helped paint and organize a senior citizen community center — all during the workday! It blew me away that Google placed such importance on volunteering. Coming from the nonprofit world, it felt meaningful seeing a company that cares deeply about these things and encourages employees to get involved. So I stayed at Google and kept finding ways to work on these programs. 


Fast forward 10 years and you’re one of the masterminds behind these events. How has employee volunteering and giving at Google changed over the years?

So many of the things that Google has created, like Gmail, came out of grassroots ideas that then grew as the company did. The same is true of our work to help Googlers get involved in their communities. 


Take GoogleServe for example. In 2008, a Googler came up with the idea to create a company day of service. Over a decade later that campaign has gone from a day-long event to a month of service that encourages over 25,000 employees to volunteer in over 90 offices around the world. And it all started with one Googler saying, "This would be a cool idea." Along the way, more Googlers have come up with ideas to get involved in the communities where we live and work through giving and volunteering. Although the programs have grown and evolved over the years, we’ve maintained the sentiment that inspired those campaigns in the first place.


We’ve also been focused on connecting Googlers to opportunities that use their distinct skills, like coding or data analysis. For example, a team of Googlers, including software engineers, program managers and UX designers, is currently working with the City of Detroit to help build a mobile-friendly search tool that helps people find affordable housing. 


How has it changed in the past year?

At the core, these programs are about giving back, but they’re also culturally iconic moments at Google. They’re a chance for teams to connect and do something together that’s more than just your average team-building activity. You’re building a shared experience and meeting people from completely different roles and departments. They’re also a chance for teams to learn and grow from people outside of Google and to bring that perspective back to their job. 


Over the past year, people have felt generally disconnected. So even though our volunteering has become virtual, it’s still a chance to interact and contribute. Virtual or not, it really does create a positive work culture. 


What advice would you give to people who have a day job in one area and a passion in another?

Be willing to work hard and get your core job done and carve out time to keep doing what you’re passionate about. When you are working on projects that you love, it keeps you engaged in a really special way. And you never know when those passion projects will intersect with your core work, or when they’ll turn into something bigger. 


Allowing developers to apply for more time to comply with Play Payments Policy

Posted by Purnima Kochikar, VP Play Partnerships

Every day we work with developers to help make Google Play a safe, secure and seamless experience for everyone, and to ensure that developers can build sustainable businesses. Last September, we clarified our Payments Policy to be more explicit about when developers should use Google Play’s billing system. While most developers already complied with this policy, we understood that some existing apps currently using an alternative billing system may need to make changes to their apps, and we gave one year for them to make these updates.

Many of our partners have been making steady progress toward the September 30 deadline. However, we continue to hear from developers all over the world that the past year has been particularly difficult, especially for those with engineering teams in regions that continue to be hard hit by the effects of the global pandemic, making it tougher than usual for them to make the technical updates related to this policy.

After carefully considering feedback from both large and small developers, we are giving developers an option to request a 6-month extension, which will give them until March 31, 2022 to comply with our Payments policy. Starting on July 22nd, developers can apply for an extension through the Help Center, and we will review each request and respond as soon as possible.

Check out the Help Center and the Policy Center for details, timelines, and frequently asked questions. You can also check out Play Academy or watch the PolicyBytes video for additional information.

High Fidelity Image Generation Using Diffusion Models

Natural image synthesis is a broad class of machine learning (ML) tasks with wide-ranging applications that pose a number of design challenges. One example is image super-resolution, in which a model is trained to transform a low resolution image into a detailed high resolution image (e.g., RAISR). Super-resolution has many applications that can range from restoring old family portraits to improving medical imaging systems. Another such image synthesis task is class-conditional image generation, in which a model is trained to generate a sample image from an input class label. The resulting generated sample images can be used to improve performance of downstream models for image classification, segmentation, and more.

Generally, these image synthesis tasks are performed by deep generative models, such as GANs, VAEs, and autoregressive models. Yet each of these generative models has its downsides when trained to synthesize high quality samples on difficult, high resolution datasets. For example, GANs often suffer from unstable training and mode collapse, and autoregressive models typically suffer from slow synthesis speed.

Alternatively, diffusion models, originally proposed in 2015, have seen a recent revival in interest due to their training stability and their promising sample quality results on image and audio generation. Thus, they offer potentially favorable trade-offs compared to other types of deep generative models. Diffusion models work by corrupting the training data by progressively adding Gaussian noise, slowly wiping out details in the data until it becomes pure noise, and then training a neural network to reverse this corruption process. Running this reversed corruption process synthesizes data from pure noise by gradually denoising it until a clean sample is produced. This synthesis procedure can be interpreted as an optimization algorithm that follows the gradient of the data density to produce likely samples.
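The forward, corrupting half of that process can be sketched in a few lines of plain Python. The linear beta schedule below is a toy choice for illustration, not the schedule used by SR3 or CDM; the point is to show how the fraction of surviving signal (often written as alpha-bar) decays toward zero, so the final step is essentially pure noise.

```python
import math
import random

# Toy forward diffusion on a 1-D "image" with a linear beta schedule.
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

# alpha_bar[t] = product of (1 - beta) up to step t. It decays toward
# zero: almost no signal survives at the final step.
alpha_bar = []
prod = 1.0
for beta in betas:
    prod *= 1.0 - beta
    alpha_bar.append(prod)

def noise_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0): scaled clean signal plus Gaussian noise."""
    a = alpha_bar[t]
    return [math.sqrt(a) * x + math.sqrt(1.0 - a) * rng.gauss(0.0, 1.0)
            for x in x0]

rng = random.Random(0)
x0 = [1.0, -0.5, 0.25]
x_mid = noise_sample(x0, 500, rng)    # partially corrupted
x_end = noise_sample(x0, T - 1, rng)  # essentially pure noise
```

Training the neural network then amounts to teaching it to undo one of these noising steps at a time; sampling runs that learned reversal from pure noise back to a clean sample.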

Today we present two connected approaches that push the boundaries of the image synthesis quality for diffusion models — Super-Resolution via Repeated Refinements (SR3) and a model for class-conditioned synthesis, called Cascaded Diffusion Models (CDM). We show that by scaling up diffusion models and with carefully selected data augmentation techniques, we can outperform existing approaches. Specifically, SR3 attains strong image super-resolution results that surpass GANs in human evaluations. CDM generates high fidelity ImageNet samples that surpass BigGAN-deep and VQ-VAE2 on both FID score and Classification Accuracy Score by a large margin.

SR3: Image Super-Resolution
SR3 is a super-resolution diffusion model that takes as input a low-resolution image and builds a corresponding high resolution image from pure noise. The model is trained on an image corruption process in which noise is progressively added to a high-resolution image until only pure noise remains. It then learns to reverse this process, beginning from pure noise and progressively removing noise to reach a target distribution through the guidance of the input low-resolution image.

With large scale training, SR3 achieves strong benchmark results on the super-resolution task for face and natural images when scaling to resolutions 4x–8x that of the input low-resolution image. These super-resolution models can further be cascaded together to increase the effective super-resolution scale factor, e.g., stacking a 64x64 → 256x256 and a 256x256 → 1024x1024 face super-resolution model together in order to perform a 64x64 → 1024x1024 super-resolution task.
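Cascading stages in this way is essentially function composition over resolutions. A throwaway sketch, with each trained SR3 model replaced by a stand-in that only tracks the resolution it consumes and produces:

```python
# Toy sketch of a super-resolution cascade. Each "stage" stands in for
# a trained SR3 model and only records the resolution jump; a real
# stage would run the diffusion sampler conditioned on its input image.

def make_stage(in_res, out_res):
    def stage(image_res):
        assert image_res == in_res, "stage fed the wrong resolution"
        return out_res
    return stage

# 64x64 -> 256x256 -> 1024x1024, as in the face super-resolution example.
cascade = [make_stage(64, 256), make_stage(256, 1024)]

res = 64
for stage in cascade:
    res = stage(res)

print(res)  # 1024
```

The assertion inside each stage mirrors the real constraint: every model in the chain is trained for one specific input resolution, so the stages must be stacked in a compatible order.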

We compare SR3 with existing methods using a human evaluation study. We conduct a two-alternative forced choice experiment in which subjects are asked to choose between the reference high resolution image and the model output when asked the question, “Which image would you guess is from a camera?” We measure the performance of the model through confusion rates (the percentage of the time raters choose the model outputs over reference images; a perfect algorithm would achieve a 50% confusion rate). The results of this study are shown in the figure below.
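The confusion-rate metric itself is simple to compute from per-trial choices; a small sketch:

```python
# Confusion rate from a two-alternative forced choice study: the
# fraction of trials in which the rater picked the model output over
# the real reference image. A rate of 0.5 means raters cannot tell
# the two apart.

def confusion_rate(choices):
    """choices: a list of "model" / "reference" picks, one per trial."""
    picked_model = sum(1 for c in choices if c == "model")
    return picked_model / len(choices)

trials = ["model", "reference", "model", "model", "reference"]
print(confusion_rate(trials))  # 0.6
```

Note that rates above 0.5 are not "better than perfect": they would mean raters systematically prefer the model output, which is a different failure of realism.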

Above: We achieve close to 50% confusion rate on the task of 16x16 → 128x128 faces, outperforming state-of-the-art face super-resolution methods PULSE and FSRGAN. Below: We also achieve a 40% confusion rate on the much more difficult task of 64x64 → 256x256 natural images, outperforming the regression baseline by a large margin.

CDM: Class-Conditional ImageNet Generation
Having shown the effectiveness of SR3 in performing natural image super-resolution, we go a step further and use these SR3 models for class-conditional image generation. CDM is a class-conditional diffusion model trained on ImageNet data to generate high-resolution natural images. Since ImageNet is a difficult, high-entropy dataset, we built CDM as a cascade of multiple diffusion models. This cascade approach involves chaining together multiple generative models over several spatial resolutions: one diffusion model that generates data at a low resolution, followed by a sequence of SR3 super-resolution diffusion models that gradually increase the resolution of the generated image to the highest resolution. It is well known that cascading improves quality and training speed for high resolution data, as shown by previous studies (for example in autoregressive models and VQ-VAE-2) and in concurrent work for diffusion models. As demonstrated by our quantitative results below, CDM further highlights the effectiveness of cascading in diffusion models for sample quality and usefulness in downstream tasks, such as image classification.

Example of the cascading pipeline that includes a sequence of diffusion models: the first generates a low resolution image, and the rest perform upsampling to the final high resolution image. Here the pipeline is for class-conditional ImageNet generation, which begins with a class-conditional diffusion model at 32x32 resolution, followed by 2x and 4x class-conditional super-resolution using SR3.
Selected generated images from our 256x256 cascaded class-conditional ImageNet model.

Along with including the SR3 model in the cascading pipeline, we also introduce a new data augmentation technique, which we call conditioning augmentation, that further improves the sample quality results of CDM. While the super-resolution models in CDM are trained on original images from the dataset, during generation they need to perform super-resolution on the images generated by a low-resolution base model, which may not be of sufficiently high quality in comparison to the original images. This leads to a train-test mismatch for the super-resolution models. Conditioning augmentation refers to applying data augmentation to the low-resolution input image of each super-resolution model in the cascading pipeline. These augmentations, which in our case include Gaussian noise and Gaussian blur, prevent each super-resolution model from overfitting to its lower resolution conditioning input, eventually leading to better higher resolution sample quality for CDM.

Altogether, CDM generates high fidelity samples superior to BigGAN-deep and VQ-VAE-2 in terms of both FID score and Classification Accuracy Score on class-conditional ImageNet generation. CDM is a pure generative model that does not use a classifier to boost sample quality, unlike other models such as ADM and VQ-VAE-2. See below for quantitative results on sample quality.

Class-conditional ImageNet FID scores at the 256x256 resolution for methods that do not use extra classifiers to boost sample quality. BigGAN-deep is reported at its best truncation value. (Lower is better.)
ImageNet classification accuracy scores at the 256x256 resolution, measuring the validation set accuracy of a classifier trained on generated data. CDM generated data attains significant gains over existing methods, closing the gap in classification accuracy between real and generated data. (Higher is better.)

Conclusion
With SR3 and CDM, we have pushed the performance of diffusion models to state-of-the-art on super-resolution and class-conditional ImageNet generation benchmarks. We are excited to further test the limits of diffusion models for a wide variety of generative modeling problems. For more information on our work, please visit Image Super-Resolution via Iterative Refinement and Cascaded Diffusion Models for High Fidelity Image Generation.

Acknowledgements:
We thank our co-authors William Chan, Mohammad Norouzi, Tim Salimans, and David Fleet, and we are grateful for research discussions and assistance from Ben Poole, Jascha Sohl-Dickstein, Doug Eck, and the rest of the Google Research, Brain Team. Thanks to Tom Small for helping us with the animations.

Source: Google AI Blog


Polishing up emoji and making them easier to share

We talk a lot about the most frequently used emoji: ?, ?, ❤️... But what about ?? Who will speak for ?? With 3,521 emoji, there are a lot you have to scroll past to get to ?. Between working from home and the delay of Unicode’s next emoji release, we had some time to reflect and answer last year’s seemingly rhetorical question: what does World Emoji Day look like without new emoji?


Well, it looks like giving some love to hundreds of emoji already on your keyboard — focusing on making them more universal, accessible and authentic — so that you can find an all-new fav emoji (I'm fond of ??). And, you can find all of these emoji (yes, including the king, ?) across more of Google’s platforms including Android, Gmail, Chat, Chrome OS and YouTube.

Emoji for everyone

Emoji have a global audience and it’s important for them to be globally relevant. Pie emoji is a curious one — it previously looked like a very specific American pumpkin pie (a family favorite!). Now it’s something everyone recognizes. I could crack a joke about how there’s more food to go around but it's not really a joke: This minor change means this one emoji can represent a whole host of pies — apple pie, blueberry pie, strawberry pie, cherry pie, chicken pot pie, beef and mushroom…the list goes on.

Animation of pie emoji changing from a slice to a whole pie

Have you ever wondered why an emoji looks the way it does? Like, the bikini emoji ? — does it really need an invisible ghost wearing it? Now, any body is a bikini body.


Animation of bikini emoji changing to new design

Other emoji changes are long overdue. This year has been eye-opening, and now, so is the face mask emoji ?. This emoji originated in Japan, where people regularly wore masks even before the COVID-19 pandemic. Today, masking is a universal way of showing kindness to others.

Animation of mask emoji opening its eyes and blinking


Emoji you can’t miss

When designing emoji, you often have to exaggerate sizes. Our transportation emoji are now easier to see since the new designs allow them to take advantage of the small space they occupy.

Animation of emoji cars changing to their new design

Emoji that get the job done

However, sometimes deviating too far from reality means an emoji comes along and taunts you, haunting your dreams. Oh, that doesn’t happen to you? Just me? Well, when I close my eyes I see the scissors emoji (✂️). I know it’s just an emoji and doesn’t need to be able to actually cut things…but the new one can!

Animation of scissors emoji changing to new design and closing blades

One of the perks of the job is that I get to learn all kinds of things — like the history of accordion design ?, the anatomy of an octopus, how parachutes work! As someone who never learned to drive, it took designing emoji to learn that the yellow painted lines on the road tell you to stay on the right of the yellow line. But, how can you stay on the right of the yellow line if the road is flanked by yellow lines? Well, our new design for motorway ?️ will pass its next driving exam.

Animation of motorway changing to new design

Other emoji just needed to be cooked a bit longer ? (or in some cases, dropped in the fryer).

Animation of food emoji (croissant, rice, bacon, tempura) changing to new designs

Emoji that keep you company at night

If you look close enough, you might also notice a few additions when you switch over to dark theme in a few of the new designs.

Animation of camping emoji changing to dark theme with new stars

Emoji that show up in more places than ever before

Android 12 will include all of these emoji when it rolls out this fall ??. And to make it easier for everyone to see emoji ? no matter how old your phone is or when your favorite messaging app updates, starting with Android 12, all apps that support AppCompat will automagically get the latest and greatest emoji ?. Now developers don’t have to write code to display cute baby seals ???.


Can’t wait until the fall? Beginning this month, you will be able to send ? and receive ? emoji in Gmail and Chat without fear they will appear broken ?. Have a Chromebook ? ? We’ve got you covered ☔ with a shiny new emoji picker coming this month. Watching your favorite creator on YouTube and chatting in the live Chat ? ? Send as many ? emoji as you like later this year.