Chrome for Android Update

 Hi, everyone! We've just released Chrome 121 (121.0.6167.101) for Android: it'll become available on Google Play over the next few days.

This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.


Android releases contain the same security fixes as their corresponding Desktop release (Windows: 121.0.6167.85/.86; Mac & Linux: 121.0.6167.85), unless otherwise noted.


Krishna Govind
Google Chrome

Exphormer: Scaling transformers for graph-structured data

Graphs, in which objects and their relations are represented as nodes (or vertices) and edges (or links) between pairs of nodes, are ubiquitous in computing and machine learning (ML). For example, social networks, road networks, and molecular structure and interactions are all domains in which underlying datasets have a natural graph structure. ML can be used to learn the properties of nodes, edges, or entire graphs.

A common approach to learning on graphs is the graph neural network (GNN), which operates on graph data by applying an optimizable transformation to node, edge, and global attributes. The most typical class of GNNs operates via a message-passing framework, whereby each layer aggregates the representation of a node with those of its immediate neighbors.

Recently, graph transformer models have emerged as a popular alternative to message-passing GNNs. These models build on the success of Transformer architectures in natural language processing (NLP), adapting them to graph-structured data. The attention mechanism in graph transformers can be modeled by an interaction graph, in which edges represent pairs of nodes that attend to each other. Unlike message passing architectures, graph transformers have an interaction graph that is separate from the input graph. The typical interaction graph is a complete graph, which signifies a full attention mechanism that models direct interactions between all pairs of nodes. However, this creates quadratic computational and memory bottlenecks that limit the applicability of graph transformers to datasets on small graphs with at most a few thousand nodes. Making graph transformers scalable has been considered one of the most important research directions in the field (see the first open problem here).

A natural remedy is to use a sparse interaction graph with fewer edges. Many sparse and efficient transformers have been proposed to eliminate the quadratic bottleneck for sequences; however, they do not generally extend to graphs in a principled manner.

In “Exphormer: Sparse Transformers for Graphs”, presented at ICML 2023, we address the scalability challenge by introducing a sparse attention framework for transformers that is designed specifically for graph data. The Exphormer framework makes use of expander graphs, a powerful tool from spectral graph theory, and is able to achieve strong empirical results on a wide variety of datasets. Our implementation of Exphormer is now available on GitHub.


Expander graphs

A key idea at the heart of Exphormer is the use of expander graphs, which are sparse yet well-connected graphs with two useful properties: 1) the matrix representation of such a graph has linear-algebraic properties similar to those of a complete graph, and 2) it exhibits rapid mixing of random walks, i.e., a small number of steps in a random walk from any starting node is enough to ensure convergence to a “stable” distribution on the nodes of the graph. Expanders have found applications in diverse areas, such as algorithms, pseudorandomness, complexity theory, and error-correcting codes.

A common class of expander graphs is the d-regular expander, in which every node has degree d (i.e., there are d edges from every node). The quality of an expander graph is measured by its spectral gap, an algebraic property of its adjacency matrix (a matrix representation of the graph in which rows and columns are indexed by nodes and entries indicate whether pairs of nodes are connected by an edge). Graphs that maximize the spectral gap are known as Ramanujan graphs: they achieve a gap of d - 2√(d - 1), which is essentially the best possible among d-regular graphs. A number of deterministic and randomized constructions of Ramanujan graphs have been proposed over the years for various values of d. We use a randomized expander construction of Friedman, which produces near-Ramanujan graphs.
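To make the “near-Ramanujan” property concrete, the short sketch below samples a random d-regular graph (which, per Friedman's result, is near-Ramanujan with high probability) and compares its spectral gap to the Ramanujan bound. This is an illustration using networkx and numpy, not the paper's implementation.

```python
# Illustrative sketch (not the paper's code): sample a random d-regular graph
# and compare its spectral gap to the Ramanujan bound d - 2*sqrt(d - 1).
import numpy as np
import networkx as nx

d, n = 4, 2000                                # degree and node count (n * d must be even)
G = nx.random_regular_graph(d, n, seed=0)

A = nx.to_numpy_array(G)                      # adjacency matrix
eigs = np.linalg.eigvalsh(A)                  # eigenvalues in ascending order
lambda_2 = max(abs(eigs[0]), abs(eigs[-2]))   # second-largest eigenvalue by magnitude

print(f"spectral gap: {d - lambda_2:.3f}")
print(f"Ramanujan bound d - 2*sqrt(d - 1): {d - 2 * np.sqrt(d - 1):.3f}")
```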

Expander graphs are at the heart of Exphormer. A good expander is sparse yet exhibits rapid mixing of random walks, making its global connectivity suitable for an interaction graph in a graph transformer model.

Exphormer replaces the dense, fully-connected interaction graph of a standard Transformer with the edges of a sparse d-regular expander graph. Intuitively, the spectral approximation and mixing properties of an expander graph allow distant nodes to communicate with each other once multiple attention layers are stacked in a graph transformer architecture, even though the nodes may not attend to each other directly. Furthermore, by ensuring that d is constant (independent of the number of nodes), we obtain a linear number of edges in the resulting interaction graph.


Exphormer: Constructing a sparse interaction graph

Exphormer combines expander edges with the input graph and virtual nodes. More specifically, the sparse attention mechanism of Exphormer builds an interaction graph consisting of three types of edges:

  • Edges from the input graph (local attention)
  • Edges from a constant-degree expander graph (expander attention)
  • Edges from every node to a small set of virtual nodes (global attention)
Exphormer builds an interaction graph by combining three types of edges. The resulting graph has good connectivity properties and retains the inductive bias of the input dataset graph while still remaining sparse.

Each component serves a specific purpose: the edges from the input graph retain the inductive bias from the input graph structure (which typically gets lost in a fully-connected attention module). Meanwhile, expander edges allow good global connectivity and random walk mixing properties (which spectrally approximate the complete graph with far fewer edges). Finally, virtual nodes serve as global “memory sinks” that can directly communicate with every node. While this results in additional edges from each virtual node equal to the number of nodes in the input graph, the resulting graph is still sparse. The degree of the expander graph and the number of virtual nodes are hyperparameters to tune for improving the quality metrics.
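For concreteness, here is a minimal sketch of how these three edge sets could be assembled. It is an illustration only, not the released Exphormer code; the helper name build_interaction_edges is hypothetical, and networkx's random_regular_graph stands in for the near-Ramanujan expander construction used in the paper.

```python
# Illustrative sketch of Exphormer's interaction graph (not the released code).
# Combines (1) input-graph edges, (2) edges of a constant-degree expander, and
# (3) edges from every node to a small set of virtual nodes.
import networkx as nx

def build_interaction_edges(input_graph: nx.Graph, expander_degree: int = 3,
                            num_virtual_nodes: int = 1, seed: int = 0):
    n = input_graph.number_of_nodes()
    nodes = list(input_graph.nodes())

    # 1) Local attention: the edges of the input graph itself.
    edges = {frozenset(e) for e in input_graph.edges()}

    # 2) Expander attention: a random d-regular graph on the same node set
    #    (a stand-in for the near-Ramanujan construction; requires n * d even).
    expander = nx.random_regular_graph(expander_degree, n, seed=seed)
    for u, v in expander.edges():
        edges.add(frozenset((nodes[u], nodes[v])))

    # 3) Global attention: every node attends to a few virtual nodes.
    virtual_nodes = [f"virtual_{i}" for i in range(num_virtual_nodes)]
    for vn in virtual_nodes:
        for u in nodes:
            edges.add(frozenset((vn, u)))

    return edges, virtual_nodes

G = nx.karate_club_graph()   # small example input graph (34 nodes, 78 edges)
edges, virtual_nodes = build_interaction_edges(G)
print(G.number_of_nodes(), "input nodes ->", len(edges), "attended pairs")
```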

Furthermore, since we use an expander graph of constant degree and a small constant number of virtual nodes for the global attention, the resulting sparse attention mechanism is linear in the size of the original input graph, i.e., it models a number of direct interactions on the order of the total number of nodes and edges.
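As a rough, back-of-the-envelope illustration (the hyperparameter choices here are hypothetical): for a graph the size of ogbn-arxiv, discussed below, with roughly 170K nodes and 1.1 million edges, an expander of degree 3 and two virtual nodes would add about 255K expander edges and 340K virtual-node edges, for a total of roughly 1.7 million attended pairs; full attention over the same graph would instead model about 2.9 × 10^10 pairs.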

We additionally show that Exphormer is as expressive as the dense transformer and obeys universal approximation properties. In particular, when the sparse attention graph of Exphormer is augmented with self-loops (edges connecting a node to itself), it can universally approximate continuous functions [1, 2].


Relation to sparse Transformers for sequences

It is interesting to compare Exphormer to sparse attention methods for sequences. Perhaps the architecture most conceptually similar to our approach is BigBird, which builds an interaction graph by combining different components. BigBird also uses virtual nodes, but, unlike Exphormer, it uses window attention and random attention from an Erdős-Rényi random graph model for the remaining components.

Window attention in BigBird looks at the tokens surrounding a token in a sequence — the local neighborhood attention in Exphormer can be viewed as a generalization of window attention to graphs.

The Erdős-Rényi graph on n nodes, G(n, p), which connects every pair of nodes independently with probability p, also functions as an expander graph for suitably high p. However, a superlinear number of edges (Ω(n log n)) is needed to ensure that an Erdős-Rényi graph is connected, let alone a good expander. On the other hand, the expanders used in Exphormer have only a linear number of edges.
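To make the contrast concrete, the short sketch below (our own illustration, not from the paper) compares the expected number of edges of an Erdős-Rényi graph at the connectivity threshold p ≈ (log n)/n with the nd/2 edges of a d-regular expander:

```python
# Illustrative edge-count comparison (not from the paper): an Erdos-Renyi graph
# at the connectivity threshold vs. a constant-degree regular expander.
import math

d = 4                                    # constant expander degree
for n in (1_000, 100_000, 10_000_000):
    er_edges = 0.5 * n * math.log(n)     # expected edges of G(n, p) at p = log(n)/n
    expander_edges = n * d // 2          # edges of any d-regular graph
    print(f"n = {n:>10,}: G(n, log(n)/n) ~ {er_edges:>12,.0f} edges, "
          f"{d}-regular expander = {expander_edges:>12,} edges")
```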


Experimental results

Earlier works have demonstrated full, dense-attention graph transformer models only on datasets with graphs of up to roughly 5,000 nodes. To evaluate the performance of Exphormer, we build on the celebrated GraphGPS framework [3], which combines message passing and graph transformers and achieves state-of-the-art performance on a number of datasets. We show that replacing dense attention with Exphormer for the graph attention component of the GraphGPS framework yields models with comparable or better performance, often with fewer trainable parameters.

Furthermore, Exphormer notably allows graph transformer architectures to scale well beyond the usual graph size limits mentioned above. Exphormer can scale up to datasets of 10,000+ node graphs, such as the Coauthor dataset, and even beyond to larger graphs such as the well-known ogbn-arxiv dataset, a citation network, which consists of 170K nodes and 1.1 million edges.

Results comparing Exphormer to standard GraphGPS on the five Long Range Graph Benchmark datasets. We note that Exphormer achieved state-of-the-art results on four of the five datasets (PascalVOC-SP, COCO-SP, Peptides-Struct, PCQM-Contact) at the time of the paper’s publication.

Finally, we observe that Exphormer, which creates an overlay graph of small diameter via expanders, exhibits the ability to effectively learn long-range dependencies. The Long Range Graph Benchmark is a suite of five graph learning datasets designed to measure the ability of models to capture long-range interactions. Results show that Exphormer-based models outperform standard GraphGPS models (which were previously state-of-the-art on four out of five datasets at the time of publication).


Conclusion

Graph transformers have emerged as an important architecture for ML that adapts the highly successful sequence-based transformers used in NLP to graph-structured data. Scalability has, however, proven to be a major challenge in enabling the use of graph transformers on datasets with large graphs. In this post, we have presented Exphormer, a sparse attention framework that uses expander graphs to improve scalability of graph transformers. Exphormer is shown to have important theoretical properties and exhibit strong empirical performance, particularly on datasets where it is crucial to learn long range dependencies. For more information, we point the reader to a short presentation video from ICML 2023.


Acknowledgements

We thank our research collaborators Hamed Shirzad and Danica J. Sutherland from The University of British Columbia as well as Ali Kemal Sinop from Google Research. Special thanks to Tom Small for creating the animation used in this post.

Source: Google AI Blog


Closed caption support in Google Meet expands to an additional thirty-one languages

What’s changing

We’ve expanded support for closed captioning to include the following additional languages:

  • Afrikaans
  • Albanian
  • Amharic
  • Armenian
  • Australian English
  • Basque
  • Burmese
  • Catalan
  • English (India)
  • English (Philippines)
  • Estonian
  • Farsi
  • Filipino
  • Galician
  • Georgian
  • Hungarian
  • Javanese
  • Latvian
  • Macedonian
  • Mongolian
  • Nepali
  • Norwegian
  • Sinhala
  • Slovak
  • Slovenian
  • Sundanese
  • Tamil (India)
  • Telugu (India)
  • Urdu
  • Uzbek
  • Zulu



You’ll notice that the newly supported languages are denoted with a “beta” tag as we continue to optimize performance.

Getting started


Rollout pace

  • This update is available now for all users.

Availability

  • Available to all Google Workspace customers and users with personal Google Accounts 

Resources


Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 121 (121.0.6167.101) for Android. It's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Extended Stable Channel Update for Desktop

The Extended Stable channel has been updated to 120.0.6099.268 for Windows and Mac, which will roll out over the coming days/weeks.

A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Srinivas Sista
Google Chrome

CES 2024: Wi-Fi 7 and the future of connectivity

The dust is settling from the recent Consumer Electronics Show (CES) in Las Vegas, but one thing is clear: the dawn of Wi-Fi 7 is here. The Wi-Fi Alliance officially certified the Wi-Fi 7 standard on January 8th, marking a pivotal moment in the journey toward bringing this faster, more reliable wireless technology to market. Most of us depend on Wi-Fi every day: it connects our home devices to the internet, giving us access to information, entertainment, education, and much more. That's why advances in Wi-Fi technology matter; they directly shape our connectivity experience.


The next generation of Wi-Fi made waves at CES 2024. The event buzzed with announcements of new devices that support Wi-Fi 7, from smartphones to smart home products and beyond; the industry has begun gearing up to embrace the power of Wi-Fi 7. Google Fiber is no exception: we're excited about the Wi-Fi Alliance's certification of Wi-Fi 7 because it opens the door to even faster multi-gig speeds and reduced latency over Wi-Fi networks.


The biggest innovation in Wi-Fi 7 is Multi-Link Operation (MLO), which allows packets to be sent over multiple frequency bands simultaneously. Prior generations of Wi-Fi used only one frequency band at a time. With MLO, a single Wi-Fi 7 device can talk to an access point over multiple radios and frequency bands at the same time.

Wi-Fi 7 devices can operate across the 2.4, 5, and 6 GHz bands and choose whichever band offers the most efficient and reliable path to the router. The result is lower latency and improved reliability, and a better experience for internet users.

We know that great wireless internet is key to our customers’ in-home experience, so we spend a lot of time working on making it better (even when it’s already good). In 2023, we deployed Wi-Fi 6E routers with tri-band connectivity, built to handle more devices with fewer slowdowns.

Our focus on delivering speed both to the home and within the home is exactly why we're including a Wi-Fi 7 router for GFiber Labs 20 Gig customers. Wi-Fi 7 is a longer-term investment: if you're thinking about purchasing a new smartphone, TV, tablet, computer, or other device in 2024, you may want to consider whether it is Wi-Fi 7 compatible. You can have the latest high-powered router, but the compatibility of your Wi-Fi devices also affects your online experience. Our device guide can help you determine Wi-Fi compatibility for your devices.

CES exhibitors unveiled new devices and products that support Wi-Fi 7 and are coming soon. Dell introduced two new laptops, the Dell XPS 16 and the Dell Alienware M15 R2 gaming laptop, both expected to be released in Q1 '24. Samsung announced its Galaxy S24 Ultra smartphone, and Acer announced a new gaming router, the Wi-Fi 7 Predator Connect X7 5G CPE, which offers dual connectivity (an Ethernet network and 5G service).

Wi-Fi Alliance’s certification of Wi-Fi 7 ushers in a host of new possibilities, and GFiber is committed to making sure that our customers can harness the speed of their internet. Expect more from us soon to help make your in-home internet even faster.

Posted by Ishan Patel, Product Manager



Want more content like this in your inbox? Subscribe to get the GFiber blog in your email.