Our favorite Chrome extensions of 2021

All year, developers from around the world have built Chrome extensions that make browsing easier, more productive and more personalized — whether you’re on the web to work, learn, play or all of the above. Today, we’re sharing our favorite extensions of the year that help people stay virtually connected, get things done and have some fun along the way. Let’s take a closer look at them.

Communicate and collaborate

Whether you’re working from the office, your couch or a bit of both, extensions can help keep you connected with your teammates. Loom makes it easier to capture and share videos with others, while Mote allows you to give quick feedback through voice commenting and transcripts. Wordtune also helps you clearly communicate by rephrasing sentences and catching typos in emails and documents.

Three icons for Loom, Mote and Wordtune side by side. The leftmost icon has a blue, square background with a white starburst in the center, and underneath it’s labeled “Loom.” The middle icon has a purple, circular background with a cursive “M” in the center and underneath it’s labeled “Mote.” The rightmost icon has a dark purple, circular background with a cursive “W” and underneath it’s labeled "Wordtune.”

Stay productive

Other extensions offer new ways to stay focused and efficient. Forest gamifies productivity through virtual tree planting and rewards, and Dark Reader protects your eyes (and sleep schedule) during long workdays. Tab Manager Plus also saves you from drowning in a sea of never-ending tabs, and Nimbus Screenshot & Screen Video Recorder makes it easier to quickly screenshot and record content to share across platforms.

Four icons for Forest, Dark Reader, Tab Manager Plus and Nimbus Screenshot & Screen Video Recorder side by side. The leftmost icon has a green background with a soil and leaf in the foreground, and underneath it’s labeled “Forest.” The icon to its right has a transparent background with a dark head with glowing glasses in the foreground, and underneath it’s labeled “Dark Reader.” The icon to its right has a transparent background with three browser windows overlapping, colored yellow, green and orange, and underneath it’s labeled “Tab Manager Plus.” The rightmost icon has a transparent background with a dashed square blue outline, a blue “N” in the center and a blue plus sign in the bottom right corner. Underneath, it’s labeled “Nimbus Screenshot & Screen Video Recorder.”

Learn virtually

With education happening online more than ever before, students and teachers need helpful virtual classroom tools. Kami creates an interactive online learning space for students and teachers, and InsertLearning helps you easily take notes and integrates with Google Classroom. Meanwhile, Toucan makes learning a new language fun and immersive, and Rememberry organizes vocabulary words into flashcard decks for quick studying throughout the day.

Four icons for Kami, InsertLearning, Toucan and Rememberry side by side. The leftmost icon has a circular blue background with a white bold ‘K’ centered, and underneath it’s labeled “Kami.” The icon to its right has a dark purple, square background with a bold “IL” in the center and underneath it’s labeled “InsertLearning.” The icon to its right has a green background with a Toucan bird in the foreground, and underneath it’s labeled “Toucan.” The rightmost icon has a transparent background with a human head silhouette facing left and a circle with an arrow on one end pointing counterclockwise overlayed. Underneath, it’s labeled “Rememberry.”

Make (and save) some change

To give your browsing experience a personal twist, Stylus helps you build and install custom themes and skins for your favorite sites. And Rakuten puts cash back in your pocket by automatically finding coupons and deals across the web — particularly helpful during one of the busiest years for online shopping.

Two icons for Stylus and Rakuten side by side. The left icon has a dark blue, square background with a light blue outline and a light blue “S” in the center, and underneath it’s labeled “Stylus.” The right icon has a transparent background with a purple "R" in the center and underlined, and underneath it’s labeled “Rakuten.”

To install and learn more about these extensions, visit our Chrome Web Store Favorites of 2021 collection. And if you’re a developer looking for tips to design a high-quality Chrome extension, check out our best practices.

Improving Vision Transformer Efficiency and Accuracy by Learning to Tokenize

Transformer models consistently obtain state-of-the-art results in computer vision tasks, including object detection and video classification. In contrast to standard convolutional approaches that process images pixel-by-pixel, the Vision Transformers (ViT) treat an image as a sequence of patch tokens (i.e., a smaller part, or “patch”, of an image made up of multiple pixels). This means that at every layer, a ViT model recombines and processes patch tokens based on relations between each pair of tokens, using multi-head self-attention. In doing so, ViT models have the capability to construct a global representation of the entire image.

At the input-level, the tokens are formed by uniformly splitting the image into multiple segments, e.g., splitting an image that is 512 by 512 pixels into patches that are 16 by 16 pixels. At the intermediate levels, the outputs from the previous layer become the tokens for the next layer. In the case of videos, video ‘tubelets’ such as 16x16x2 video segments (16x16 images over 2 frames) become tokens. The quality and quantity of the visual tokens decide the overall quality of the Vision Transformer.

The main challenge in many Vision Transformer architectures is that they often require too many tokens to obtain reasonable results. Even with 16x16 patch tokenization, for instance, a single 512x512 image corresponds to 1024 tokens. For videos with multiple frames, that results in tens of thousands of tokens needing to be processed at every layer. Considering that the Transformer computation increases quadratically with the number of tokens, this can often make Transformers intractable for larger images and longer videos. This leads to the question: is it really necessary to process that many tokens at every layer?
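As a back-of-the-envelope illustration of that scaling (a minimal sketch: the 64-frame clip is a made-up example, and the costs are relative token-pair counts, not measured FLOPs):

def num_tokens(height, width, patch=16, frames=1, tubelet_frames=1):
    # Tokens from uniform splitting: (H / patch) * (W / patch) * (frames / tubelet_frames).
    return (height // patch) * (width // patch) * (frames // tubelet_frames)

image_tokens = num_tokens(512, 512)                                # 32 * 32 = 1024
video_tokens = num_tokens(512, 512, frames=64, tubelet_frames=2)   # 1024 * 32 = 32768

# Self-attention compares every pair of tokens, so its cost grows with the square of the count:
print(image_tokens, video_tokens, (video_tokens / image_tokens) ** 2)   # 1024 32768 1024.0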

In “TokenLearner: What Can 8 Learned Tokens Do for Images and Videos?”, an earlier version of which was presented at NeurIPS 2021, we show that adaptively generating a smaller number of tokens, rather than always relying on tokens formed by uniform splitting, enables Vision Transformers to run much faster and perform better. TokenLearner is a learnable module that takes an image-like tensor (i.e., input) and generates a small set of tokens. This module can be placed at various locations within the model of interest, significantly reducing the number of tokens to be handled in all subsequent layers. The experiments demonstrate that having TokenLearner saves memory and computation by half or more without damaging classification performance, and because of its ability to adapt to inputs, it can even increase accuracy.

The TokenLearner
We implement TokenLearner using a straightforward spatial attention approach. In order to generate each learned token, we compute a spatial attention map highlighting regions-of-importance (using convolutional layers or MLPs). Such a spatial attention map is then applied to the input to weight each region differently (and discard unnecessary regions), and the result is spatially pooled to generate the final learned tokens. This is repeated multiple times in parallel, resulting in a few (~10) tokens out of the original input. This can also be viewed as performing a soft-selection of the pixels based on the weight values, followed by global average pooling. Note that the functions to compute the attention maps are governed by different sets of learnable parameters, and are trained in an end-to-end fashion. This allows the attention functions to be optimized in capturing different spatial information in the input. The figure below illustrates the process.

The TokenLearner module learns to generate a spatial attention map for each output token, and uses it to abstract the input to tokenize. In practice, multiple spatial attention functions are learned, are applied to the input, and generate different token vectors in parallel.
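As a rough illustration of the module just described, here is a minimal NumPy sketch. It is not the released implementation: the text above says the attention functions can be convolutional layers or MLPs, and the MLP sizes, nonlinearity and spatial softmax below are illustrative assumptions.

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def token_learner(x, w_hidden, w_attn):
    """Reduce an image-like tensor to a few learned tokens.

    x:        (N, C) -- N spatial positions (e.g. 14*14 patches), C channels
    w_hidden: (C, H) -- first MLP layer producing attention features
    w_attn:   (H, S) -- second MLP layer producing S attention maps, one per output token
    returns:  (S, C) -- S learned tokens
    """
    h = np.maximum(x @ w_hidden, 0.0)   # (N, H); ReLU nonlinearity used here for brevity
    attn = softmax(h @ w_attn, axis=0)  # (N, S); each map is normalized over the spatial axis
    # Weight every spatial position by each attention map, then spatially pool:
    return attn.T @ x                   # (S, C) -- soft-selection followed by pooling

# Example: 196 patch tokens with 768 channels reduced to 8 learned tokens.
rng = np.random.default_rng(0)
x = rng.normal(size=(196, 768))
w1 = rng.normal(size=(768, 64)) * 0.02
w2 = rng.normal(size=(64, 8)) * 0.02
print(token_learner(x, w1, w2).shape)   # (8, 768)

In practice, each of the attention functions has its own learnable parameters and is trained end-to-end with the rest of the model, as described above.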

As a result, instead of processing fixed, uniformly tokenized inputs, TokenLearner enables models to process a smaller number of tokens that are relevant to the specific recognition task. That is, (1) we enable adaptive tokenization so that the tokens can be dynamically selected conditioned on the input, and (2) this effectively reduces the total number of tokens, greatly reducing the computation performed by the network. These dynamically and adaptively generated tokens can be used in standard transformer architectures such as ViT for images and ViViT for videos.

Where to Place TokenLearner
After building the TokenLearner module, we had to determine where to place it. We first tried placing it at different locations within the standard ViT architecture with 224x224 images. TokenLearner generated 8 or 16 tokens, far fewer than the 196 or 576 tokens the standard ViTs use. The figure below shows ImageNet few-shot classification accuracies and FLOPS of the models with TokenLearner inserted at various relative locations within ViT B/16, which is the base model with 12 attention layers operating on 16x16 patch tokens.

Top: ImageNet 5-shot transfer accuracy with JFT 300M pre-training, with respect to the relative TokenLearner locations within ViT B/16. Location 0 means TokenLearner is placed before any Transformer layer. Base is the original ViT B/16. Bottom: Computation, measured in terms of billions of floating point operations (GFLOPS), per relative TokenLearner location.

We found that inserting TokenLearner after the initial quarter of the network (at 1/4) achieves almost identical accuracies as the baseline, while reducing the computation to less than a third of the baseline. In addition, placing TokenLearner at the later layer (after 3/4 of the network) achieves even better performance compared to not using TokenLearner while performing faster, thanks to its adaptiveness. Due to the large difference between the number of tokens before and after TokenLearner (e.g., 196 before and 8 after), the relative computation of the transformers after the TokenLearner module becomes almost negligible.
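To make that concrete, here is a small illustrative calculation (relative attention cost only, using tokens-squared per block as a stand-in for the true FLOP count):

# 12-block ViT; a TokenLearner-style reduction from 196 to 8 tokens is inserted after 3/4 of the blocks.
blocks, tokens, insert_at = 12, 196, 9
total_cost = 0
cost_after_insert = 0
for i in range(blocks):
    if i == insert_at:
        tokens = 8                        # learned tokens replace the 196 uniform patch tokens
    total_cost += tokens ** 2             # self-attention cost scales with tokens^2 per block
    if i >= insert_at:
        cost_after_insert += tokens ** 2
print(f"{cost_after_insert / total_cost:.3%} of attention cost falls after TokenLearner")   # ~0.06%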

Comparing Against ViTs
We compared the standard ViT models with TokenLearner against those without it while following the same setting on ImageNet few-shot transfer. TokenLearner was placed in the middle of each ViT model at various locations such as at 1/2 and at 3/4. The below figure shows the performance/computation trade-off of the models with and without TokenLearner.

Performance of various versions of ViT models with and without TokenLearner, on ImageNet classification. The models were pre-trained with JFT 300M. The closer a model is to the top-left of each graph the better, meaning that it runs faster and performs better. Observe how TokenLearner models perform better than ViT in terms of both accuracy and computation.

We also inserted TokenLearner within larger ViT models, and compared them against the giant ViT G/14 model. Here, we applied TokenLearner to ViT L/10 and L/8, which are the ViT models with 24 attention layers taking 10x10 (or 8x8) patches as initial tokens. The below figure shows that despite using many fewer parameters and less computation, TokenLearner performs comparably to the giant G/14 model with 48 layers.

Left: Classification accuracy of large-scale TokenLearner models compared to ViT G/14 on ImageNet datasets. Right: Comparison of the number of parameters and FLOPS.

High-Performing Video Models
Video understanding is one of the key challenges in computer vision, so we evaluated TokenLearner on multiple video classification datasets. This was done by adding TokenLearner into Video Vision Transformers (ViViT), which can be thought of as a spatio-temporal version of ViT. TokenLearner learned 8 (or 16) tokens per timestep.

When combined with ViViT, TokenLearner obtains state-of-the-art (SOTA) performance on multiple popular video benchmarks, including Kinetics-400, Kinetics-600, Charades, and AViD, outperforming the previous Transformer models on Kinetics-400 and Kinetics-600 as well as previous CNN models on Charades and AViD.

Models with TokenLearner outperform state-of-the-art on popular video benchmarks (captured from Nov. 2021). Left: popular video classification tasks. Right: comparison to ViViT models.
Visualization of the spatial attention maps in TokenLearner, over time. As the person is moving in the scene, TokenLearner pays attention to different spatial locations to tokenize.

Conclusion
While Vision Transformers serve as powerful models for computer vision, a large number of tokens and the computation associated with them have been a bottleneck for their application to larger images and longer videos. In this project, we illustrate that retaining such a large number of tokens and fully processing them over the entire set of layers is not necessary. Further, we demonstrate that learning a module that extracts tokens adaptively based on the input image allows the model to attain even better performance while saving compute. The proposed TokenLearner was particularly effective in video representation learning tasks, which we confirmed with multiple public datasets. A preprint of our work as well as code are publicly available.

Acknowledgement
We thank our co-authors: AJ Piergiovanni, Mostafa Dehghani, and Anelia Angelova. We also thank the Robotics at Google team members for the motivating discussions.

Source: Google AI Blog


Year in Search: New Zealand’s Top Trending Searches for 2021 Revealed

Search can help you find a world of information – and the way people use Search can be a window into the world. 


Here’s a glimpse of the trending searches of 2021, a year in which we looked for ways to tie a tie, make self-raising flour and solve a Rubik’s Cube. The highs and lows of the year had us keeping up to date on locations of interest, looking into crypto and stock prices and checking the latest standings on the Olympic medal table. Collectively we mourned losses, marvelled at the Met and spent a few afternoons getting the latest announcements from the Ministry of Health.


Here’s a look at some of the themes from 2021:


Comeback Queens and Kings.

2021 was a year of reunions, redemptions and triumphant releases of the musical kind. We were treated to a 90s throwback with our favourite Friends reuniting on the Central Perk couch. Tiger Woods defied the odds to get back on the green and swinging; Adele graced the airwaves to teach us all about 30, and the world celebrated when Cleo Smith was returned to her family safe and sound. We searched for banana bread, scones and carrot cake with almost as much vigour as in 2020 - though this year we also had guacamole and playdough on the menu.


Crises, COVID-19 and remember that blocked Canal?

Natural disasters captured our attention this year - both on our shores and farther afield. With earthquakes and tsunamis putting us on high alert, we were also shaken by news of the Kermadec Islands. Understandably we continued to seek more and more information about the pandemic, in a year where the vaccine, new variants and changing restrictions kept us on our toes. We sought answers about the crisis in Afghanistan. And remember that supply chain snafu in the Suez Canal?


Shocking absolutely no-one: Kiwis love sport!

Despite a global pandemic, Kiwis were spoiled with a number of suspenseful, powerful and history-making sporting events this year. As always we searched for games of cricket and all their stats, whether it’s us, India, Australia or Pakistan on the pitch - we’re not partial! We spent days on the water with the America’s Cup in Tāmaki Makaurau’s harbour, Lisa Carrington and the other heroic paddlers in Tokyo, and of course Sophie Pascoe’s medal-winning triumph in the Paralympics. Once the Olympic medal table was in the rear view mirror, we celebrated Emma Raducanu’s title win. Not only did our homegrown athletic heroes make us proud, but we also had the NBA, Australian Open, NRL and Euros in the line-up!


Understanding the oddities of our world.

Sometimes the truth is stranger than fiction. While the world battled COVID-19, we saw our own surge of RSV through the tail end of winter. We wondered when the next Blood Moon would be, and although they come around every year, we struggled to remember when daylight saving ends or even when to treat our loved ones on Valentine’s, Mother’s or Father’s Day. When seeking a little escapism from our daily lives, our searches show we found entertainment in Squid Game, Sweet Tooth and Bridgerton, all bringing new perspectives on reality to our small screens. While we’re discussing fact or fiction…what really is going on with Pete Davidson and Kim Kardashian?


Every day, millions of people come to Google to ask questions. Check out the top trending search lists here:


Overall

  • COVID-19 NZ

  • NBA

  • Stuff NZ

  • Australia vs India

  • NRL

  • Locations of interest

  • Olympic medal table

  • Cricinfo

  • My covid record

  • Australian Open


Kiwis

  • Lisa Carrington

  • Lydia Ko

  • Judith Collins

  • Brian Tamaki

  • Chris Cairns

  • Lorde

  • Sophie Pascoe

  • Joseph Parker

  • Nicola Willis

  • Valerie Adams


Global Figures

  • Alec Baldwin

  • Christopher Reeve

  • Cleo Smith

  • Travis Scott

  • Kyle Rittenhouse

  • Pete Davidson

  • Emma Raducanu

  • Adele

  • Tiger Woods

  • Conor McGregor


News Events (non COVID-19)

  • Tsunami warning NZ

  • Kermadec Islands

  • RSV

  • Earthquakes today

  • Metropolitan Museum of Art

  • Cleo Smith

  • Afghanistan

  • Kyle Rittenhouse

  • Suez Canal

  • Blood moon


When

  • When is fathers day nz

  • When is the next covid announcement nz

  • When does daylight saving end

  • When is valentines day

  • When is the next blood moon

  • When does the olympics finish

  • When is auckland going to level 3

  • When does lockdown end nz

  • When is the next america's cup race

  • When is mothers day nz


Recipes - Sweet

  • Apple crumble recipe

  • Carrot cake recipe

  • Scones recipe

  • Cinnamon rolls recipe

  • Pancakes recipe

  • Banana cake recipe

  • Banana bread recipe

  • Chocolate brownie recipe

  • Cheesecake recipe

  • Afghan recipe


Recipes - Savoury

  • Guacamole recipe

  • Pumpkin soup recipe

  • Pizza dough recipe

  • Bread recipe

  • Carbonara recipe

  • Naan bread recipe

  • Focaccia recipe

  • Playdough recipe nz

  • Cottage pie recipe

  • Hash brown recipe


Loss

  • Sean Wainui

  • Prince Philip

  • Gabby Petito

  • DMX

  • Sean Lock

  • Olivia Podmore

  • Brian Laundrie

  • Sarah Everard

  • Helen McCrory

  • Charlie Watts


COVID-19 Related

  • COVID-19 NZ

  • Locations of interest

  • My covid record

  • Covid cases today NZ

  • Ministry of Health

  • My vaccine pass

  • Book my vaccine

  • My health account

  • Traffic light system NZ

  • Covid vaccine


Sports

  • NBA

  • Australia vs India

  • NRL

  • Olympic medal table

  • Cricinfo

  • Australian Open

  • Pakistan vs New Zealand

  • Euros

  • America's Cup

  • India vs England


TV Shows

  • Squid Game

  • Bridgerton

  • Sweet Tooth

  • Firefly Lane

  • The Serpent

  • Wandavision

  • Clickbait

  • Friends Reunion

  • Maid

  • Ginny and Georgia



Set tasks to repeat in Google Calendar

Quick launch summary

You can now set tasks to repeat in Google Calendar and customize the recurrence schedules, similar to other entry types in Calendar. This means you can:
  • Create tasks with recurrence rules
  • Edit the recurrence rule of an existing task
  • Set an "end condition" for a recurrence rule
Change the recurrence of a task in Calendar
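For readers who handle Calendar data programmatically, recurrence in Calendar follows the iCalendar (RFC 5545) RRULE format, so an "end condition" like the one mentioned above maps to a COUNT or UNTIL clause. The strings below are an illustration of that syntax only, not a specific Tasks or Calendar API call:

# Illustrative RFC 5545 recurrence rules; COUNT and UNTIL express the "end condition".
weekly_five_times = "RRULE:FREQ=WEEKLY;BYDAY=MO;COUNT=5"         # ends after 5 occurrences
daily_until_date  = "RRULE:FREQ=DAILY;UNTIL=20220131T000000Z"    # ends on a fixed date
every_other_week  = "RRULE:FREQ=WEEKLY;INTERVAL=2;BYDAY=TU,TH"   # no end condition
print(weekly_five_times, daily_until_date, every_other_week, sep="\n")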




We hope this change helps you get more things done in Google Workspace.

Getting started

  • Admins: There is no admin control for this feature.
  • End users: These options will be available when creating a new task, or editing an existing task in Calendar. Visit the Help Center to learn more about tasks in Calendar.

Rollout pace

Availability

  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers

Resources

How photos can curb illegal deforestation in the Amazon

As of 2020, Brazil continues to lead the world in primary forest loss with an increase of 25% year over year. In the Amazon, the clear-cut deforestation rate is at its highest in over 10 years. Instituto Socioambiental (ISA) is a Brazilian nonprofit founded in 1994 to promote solutions to this crisis and other social and environmental issues. With a focus on the defense of the environment, cultural heritage, and human rights, ISA promotes solutions for indigenous peoples and other traditional communities in Brazil.

Watch this short documentary about their impact, how they use drone footage and Google Earth to prevent deforestation, and learn more about the role of indigenous communities in protecting local forests and biodiversity.

How Google and Gannett uncover new stories from archival photos

For golfer Tom Watson, seeing never-before-published images of himself during the 1981 Masters brought back vivid memories of the important moments that led to his victory that year. For Gannett, being able to publish the video interview with Watson was part of a three-way editorial partnership between USA TODAY, Golfweek and the Augusta Chronicle, made possible by many never-before-seen images curated from the Chronicle’s 30+ years of Masters photo archives.

All three titles are part of Gannett, which partnered with the Google News Initiative (GNI) to comb through their network archives for valuable but untapped visual assets to digitize and make available for present use through Iron Mountain Entertainment Services (IMES). Gannett mined the Masters images from the archives of the Augusta Chronicle, which were part of 40,000+ images IMES digitized from several properties across Gannett, including The Chronicle, Detroit Free Press, the Tennessean and USA TODAY. Gannett and GNI teamed up with IMES to securely transport the assets from multiple Gannett photo archives to IMES’s climate-controlled facility in Boyers, Pennsylvania. Once there, the IMES team digitized them using high-resolution image capture technology.

This project further extends a multi-dimensional partnership that has been in place for over a decade between Google and Gannett. One of the primary goals was to create new editorial content from recently digitized images unlocked from analog libraries, much of which has never been seen before. Gannett used this rediscovered editorial content to drive additional audience and subscribers to various publications. The material will also allow for expanded revenue through traffic-based advertising, sponsorship or syndication channels such as Gannett’s in-house image licensing team at Imagn.com.

One of our challenges, and one likely shared by many legacy publishers, was finding materials from the archives that were in good shape and had supporting research materials. We had to carefully select visual assets that survived mergers and physical building moves, while rejecting materials that were not preserved over time. Doing this due diligence before embarking on the digitization process is important to ensure efforts are being put towards material that will have value and organization.

The variety of archival material that was digitized in this project allowed Gannett to experiment with several different content strategies at various properties. As a publisher, Gannett is pushing towards a subscription-based model for all of its media outlets, with aggressive growth goals. The Masters archival material offered us the opportunity to examine readership trends in content at different properties, with some being subscription-based and others being free to the public. This digitized material also allows Gannett to respond quickly to news where it is relevant to current-day events. For example, when groundbreaking Black golfer Lee Elder recently died, we used these archival images in a gallery accompanying Elder’s obituary.

This is a black and white photo from 1975 of Lee Elder swinging a golf club at a major competition.

Lee Elder at the Augusta National Golf Course during the 1975 Masters. Elder was the first Black player to compete at the Masters. (File Photo -The Augusta Chronicle via USA TODAY NETWORK)

Early in their transition to a subscription model, the Detroit Free Press used a similar strategy of mixing subscriber-only content with some that was open to the broader public. They both drew significant traffic and drove consumer subscription conversions using this archival material. A story looking back at the details of Pope John Paul II’s visit to Detroit with some of the archival photos was published as a subscriber-only exclusive that drove new subscriptions. The broader gallery page of archival images was open to the public and drew more than 100,000 page views. A reader survey of existing subscribers showed that 37% had read the piece on the Free Press’ site, and more than 50% of those felt they would enjoy seeing more content like this in the Free Press.

Pope John Paul II and a cardinal in the Pope Mobile inside the Silverdome in Detroit, Michigan.

Pope John Paul II inside the Silverdome on September 19, 1987 in Detroit, Michigan. (Manny Crisostomo -The Detroit Free Press via USA TODAY NETWORK)

Our next steps involve identifying ongoing editorial opportunities where we can use this material. Gannett will also use artificial intelligence and machine learning technologies from Google Cloud Platform to uncover more opportunities to monetize content. The Masters material in particular offers a yearly opportunity to create “from the archives” types of content experiences where we can resurface rarely or never-before-seen images and develop new storylines. Within Gannett, this future content can be created and cross-promoted at local (Augusta), national (USA TODAY) and sports specialty (Golfweek) levels, giving this material far greater reach beyond that of any single publication.

Disrupting the Glupteba operation

Google TAG actively monitors threat actors and the evolution of their tactics and techniques. We use our research to continuously improve the safety and security of our products and share this intelligence with the community to benefit the internet as a whole.

As announced today, Google has taken action to disrupt the operations of Glupteba, a multi-component botnet targeting Windows computers. We believe this action will have a significant impact on Glupteba's operations. However, the operators of Glupteba are likely to attempt to regain control of the botnet using a backup command and control mechanism that uses data encoded on the Bitcoin blockchain.

Glupteba is known to steal user credentials and cookies, mine cryptocurrencies on infected hosts, deploy and operate proxy components targeting Windows systems and IoT devices. TAG has observed the botnet targeting victims worldwide, including the US, India, Brazil and Southeast Asia.

The Glupteba malware family is primarily distributed through pay per install (PPI) networks and via traffic purchased from traffic distribution systems (TDS). For a period of time, we observed thousands of instances of malicious Glupteba downloads per day. The following image shows a webpage mimicking a software crack download which delivers a variant of Glupteba to users instead of the promised software.

Example cracked software download site distributing Glupteba

While analyzing Glupteba binaries, our team identified a few containing a git repository URL: “git.voltronwork.com”. This finding sparked an investigation that led us to identify, with high confidence, multiple online services offered by the individuals operating the Glupteba botnet. These services include selling access to virtual machines loaded with stolen credentials (dont[.]farm), proxy access (awmproxy), and selling credit card numbers (extracard) to be used for other malicious activities such as serving malicious ads and payment fraud on Google Ads.

Example of a cryptocurrency scam uploaded to Google Ads by Glupteba services

This past year, TAG has been collaborating with Google’s CyberCrime Investigation Group to disrupt Glupteba activity involving Google services. We’ve terminated around 63M Google Docs observed to have distributed Glupteba, 1,183 Google Accounts, 908 Cloud Projects, and 870 Google Ads accounts associated with their distribution. Furthermore, 3.5M users were warned before downloading a malicious file through Google Safe Browsing warnings.

In the last few days, our team partnered with Internet infrastructure providers and hosting providers, including CloudFlare, to disrupt Glupteba’s operation by taking down servers and placing warning interstitial pages in front of the malicious domain names. During this time, an additional 130 Google accounts associated with this operation were terminated.

Parallel to the analysis, tracking, and technical disruption of this botnet, Google has filed a lawsuit against two individuals believed to be located in Russia for operating the Glupteba botnet and its various criminal schemes. Google is alleging violations of the Racketeer Influenced and Corrupt Organizations Act (RICO), the Computer Fraud and Abuse Act, the Electronic Communications Privacy Act, and the Lanham Act, as well as tortious interference with business relationships and unjust enrichment.

While these actions may not completely stop Glupteba, TAG estimates that combined efforts will materially affect the actor’s ability to conduct future operations.

Glupteba’s C2 Backup Mechanism

The command and control (C2) communication for this botnet uses HTTPS to communicate commands and binary updates between the control servers and infected systems. To add resilience to their infrastructure, the operators have also implemented a backup mechanism using the Bitcoin blockchain. In the event that the main C2 servers do not respond, the infected systems can retrieve backup domains encrypted in the latest transaction from the following bitcoin wallet addresses:

  • '1CgPCp3E9399ZFodMnTSSvaf5TpGiym2N1' [1]
  • '15y7dskU5TqNHXRtu5wzBpXdY5mT4RZNC6' [2]
  • '1CUhaTe3AiP9Tdr4B6wedoe9vNsymLiD97' [3]

The following 32-byte AES keys used for decryption are hard-coded in the binaries:

  • 'd8727a0e9da3e98b2e4e14ce5a6cf33ef26c6231562a3393ca465629d66503cf'
  • '1bd83f6ed9bb578502bfbb70dd150d286716e38f7eb293152a554460e9223536'

The blockchain transaction’s OP_RETURN data can be decrypted using AES-256 GCM to recover a backup command and control domain name. The first 12 bytes of the OP_RETURN data contain the IV, the last 16 bytes the GCM tag, and the middle section is the AES-256 GCM-encrypted domain. Full details of Glupteba’s network protocol can be found in this report from 2020; the following Python script illustrates how one can decrypt an encrypted domain name:

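A minimal sketch of that decryption, assuming the third-party cryptography package; the hex payload passed in would come from the OP_RETURN field of the wallet’s latest transaction, and the placeholder argument shown in the usage comment is not real data:

# Sketch: recover a Glupteba backup C2 domain from OP_RETURN data.
# Layout per the text above: first 12 bytes = IV, last 16 bytes = GCM tag,
# middle = AES-256-GCM ciphertext. Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY_HEX = "d8727a0e9da3e98b2e4e14ce5a6cf33ef26c6231562a3393ca465629d66503cf"

def decrypt_op_return(op_return_hex: str, key_hex: str = KEY_HEX) -> str:
    data = bytes.fromhex(op_return_hex)
    iv, tag, ciphertext = data[:12], data[-16:], data[12:-16]
    # AESGCM.decrypt expects ciphertext||tag and raises InvalidTag on a wrong key or corrupt data.
    plaintext = AESGCM(bytes.fromhex(key_hex)).decrypt(iv, ciphertext + tag, None)
    return plaintext.decode()

# Usage: decrypt_op_return("<hex-encoded OP_RETURN payload>") -> backup domain string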

IOCs

Recent domains used for command and control:

  • nisdably[.]com
  • runmodes[.]com
  • yturu[.]com
  • retoti[.]com
  • trumops[.]com
  • evocterm[.]com
  • iceanedy[.]com
  • ninhaine[.]com
  • anuanage[.]info

Recent sha256 hashes of malware samples:

  • df84d3e83b4105f9178e518ca69e1a2ec3116d3223003857d892b8a6f64b05ba
  • eae4968682064af4ae6caa7fff78954755537a348dce77998e52434ccf9258a2
  • a2fd759ee5c470da57d8348985dc34348ccaff3a8b1f5fa4a87e549970eeb406
  • d8a54d4b9035c95b8178d25df0c8012cf0eedc118089001ac21b8803bb8311f4
  • c3f257224049584bd80a37c5c22994e2f6facace7f7fb5c848a86be03b578ee8
  • 8632d2ac6e01b6e47f8168b8774a2c9b5fafaa2470d4e780f46b20422bc13047
  • 03d2771d83c50cc5cdcbf530f81cffc918b71111b1492ccfdcefb355fb62e025
  • e673ce1112ee159960f1b7fed124c108b218d6e5aacbcb76f93d29d61bd820ed
  • 8ef882a44344497ef5b784965b36272a27f8eabbcbcea90274518870b13007a0
  • 79616f9be5b583cefc8a48142f11ae8caf737be07306e196a83bb0c3537ccb3e
  • db84d13d7dbba245736c9a74fc41a64e6bd66a16c1b44055bd0447d2ae30b614

New action to combat cyber crime

Today, we took action to disrupt Glupteba, a sophisticated botnet which targets Windows machines and protects itself using blockchain technology. Botnets are a real threat to Internet users, and require the efforts of industry and law enforcement to deter them. As part of our ongoing work to protect people who use Google services via Windows and other IoT devices, our Threat Analysis Group took steps to detect and track Glupteba’s malicious activity over time. Our research and understanding of this botnet’s operations puts us in a unique position to disrupt it and safeguard Internet users around the world.

We’re doing this in two ways. First, we are coordinating with industry partners to take technical action.

And second, we are using our resources to launch litigation — the first lawsuit against a blockchain enabled botnet — which we think will set a precedent, create legal and liability risks for the botnet operators, and help deter future activity.

About the Glupteba botnet

A botnet is a network of devices connected to the internet that have been infected with a type of malware that places them under the control of bad actors. They can then use the infected devices for malicious purposes, such as to steal your sensitive information or commit fraud through your home network.

After a thorough investigation, we determined that the Glupteba botnet currently involves approximately one million compromised Windows devices worldwide, and at times, grows at a rate of thousands of new devices per day. Glupteba is notorious for stealing users’ credentials and data, mining cryptocurrencies on infected hosts, and setting up proxies to funnel other people’s internet traffic through infected machines and routers.

Technical action

We coordinated with industry partners to take technical action. We have now disrupted key command and control infrastructure so those operating Glupteba should no longer have control of their botnet — for now.

However, due to Glupteba’s sophisticated architecture and the recent actions that its organizers have taken to maintain the botnet, scale its operations, and conduct widespread criminal activity, we have also decided to take legal action against its operators, which we believe will make it harder for them to take advantage of unsuspecting users.

Legal Strategy & Disruption

Our litigation was filed against the operators of the botnet, who we believe are based in Russia. We filed the action in the Southern District of New York for computer fraud and abuse, trademark infringement, and other claims. We also filed a temporary restraining order to bolster our technical disruption effort. If successful, this action will create real legal liability for the operators.

Making the Internet Safer

Unfortunately, Glupteba’s use of blockchain technology as a resiliency mechanism is notable here and is becoming a more common practice among cyber crime organizations. The decentralized nature of blockchain allows the botnet to recover more quickly from disruptions, making it that much harder to shut down. We are working closely with industry and government as we combat this type of behavior, so that even if Glupteba returns, the internet will be better protected against it.

Our goal is to bring awareness to these issues to protect our users and the broader ecosystem, and to prevent future malicious activity.

We don’t just plug security holes, we work to eliminate entire classes of threats for consumers and businesses whose work depends on the Internet. We have teams of analysts and security experts who are dedicated to identifying and stopping issues like DDoS, phishing campaigns, zero-day vulnerabilities, and hacking against Google, our products, and our users.

Taking proactive actions like this is critical to our security. We understand and recognize the threats the Internet faces, and we are doing our part to address them.

Chrome for Android Update

Hi, everyone! We've just released Chrome 96 (96.0.4664.92) for Android: it'll become available on Google Play over the next few days.

This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Ben Mason
Google Chrome

Lock audio and video during a Google Meet meeting from iOS devices

Quick launch summary

Google Meet hosts and co-hosts can now lock all participants’ audio and video from iOS devices, which mutes all participants or prevents them from using their cameras, respectively. These settings can help prevent disruptions, keeping your meetings on track and productive.

Previously, it was only possible to use these locks when using Google Meet on a computer. We anticipate this feature will be available for Android in early 2022, and we will provide an update on the Workspace Updates Blog once it is available.

Additional details 

Please note:

The Audio Lock and Video Lock settings apply to all devices, regardless of whether they are set from a computer or an iOS device.

When Audio Lock or Video Lock is enabled, mobile participants may be removed from the meeting if their device doesn’t have:

  • The most updated version of the Meet or Gmail app
  • Android OS version M or newer 
  • iOS version 12 or newer

Once Audio or Video Lock is disabled, removed participants will be able to rejoin.

Getting started

  • Admins: There is no admin control for this feature.
  • End users: Visit the Help Center to learn more about locking audio or video during a Google Meet meeting.


Rollout pace


Availability

  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers


Resources