Stable Channel Update for Chrome OS

 The Stable channel is being updated to 87.0.4280.88 (Platform version: 13505.63.0) for most Chrome OS devices. This build contains a number of bug fixes and security updates. Systems will be receiving updates over the next several days.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Cindy Bayless

Google Chrome OS

Dev Channel Update for Desktop

 The Dev channel has been updated to 89.0.4343.0 for Windows, Mac and Linux platforms.

A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Google Chrome 

Prudhvikumar Bommana

New interface for Google Vault

Quick launch summary 

We’ve designed a new interface for Google Vault. The new interface makes it easier to navigate, and includes new productivity features for faster task completion. To use the new interface, visit vault.google.com and sign in with your Google Workspace admin account. 

The new interface includes all the core functionality from the classic interface, and there’s no impact on your existing Google Vault setup. The main interface improvements include the following: 
  • When you first sign in, you’re directed to a home page with up to three options (depending on your permissions): Retention, Matters, and Reporting. 
  • When you set up retention rules and holds, step-by-step flows with more tooltips guide you through the process. 
  • Custom retention rules, holds, and search results are listed in sortable, filterable tables. This helps you more easily understand the scope of your information governance policies and search results. 
  • When you explore search results and hold reports, you keep your context. Clicking an item opens a side panel instead of taking you to a new page. 

The classic interface is still available at ediscovery.google.com. Matters, retention rules, and audit log data will sync between the interfaces and be available in both places until ediscovery.google.com is shut down. We’ll provide more details regarding this turndown on the Workspace Updates blog at least three months in advance. 


Getting started 

  • Admins: The new interface is available at vault.google.com. Visit the Help Center to learn more about managing Google Vault for your organization. Note that users in the classic interface may see a banner inviting them to try the new interface starting on December 3, 2020. 
  • End users: No end user impact. 
When you first sign in, you’ll now see a home page with up to three options: Retention, Matters, and Reporting. 

Custom rules are now listed in a sortable, filterable table. 

You’ll see a new interface for creating and managing rules. 

Rollout pace 

  • This feature is available now for all users. 

Availability

  • Available to Google Workspace Business Plus, Enterprise Standard, and Enterprise Plus customers, G Suite Business and Enterprise for Education customers, and customers with the Vault add-on license 
  • Not available to Google Workspace Essentials, Business Starter, and Business Standard customers, as well as G Suite Basic, Education, and Nonprofit customers 

Resources 


Beta Channel Update for Desktop

The Chrome team is excited to announce the promotion of Chrome 88 to the Beta channel for Windows, Mac and Linux. Chrome 88.0.4324.27 contains our usual under-the-hood performance and stability tweaks, but there are also some cool new features to explore - please head to the Chromium blog to learn more!



A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Srinivas Sista

Google Chrome

Organizing the world’s information: where does it all come from?

Since Google was founded more than 22 years ago, we’ve continued to pursue an ambitious mission of organizing the world’s information and making it universally accessible and useful. While we started with organizing web pages, our mission has always been much more expansive. We didn’t set out to organize the web’s information, but all the world’s information. 

Quickly, Google expanded beyond the web and began to look for new ways to understand the world and make information and knowledge accessible for more people. The internet--and the world--have changed a lot since those early days, and we’ve continued to improve Google Search to both anticipate and respond to the ever-evolving information needs that people have. 

It’s no mystery that the search results you saw back in 1998 look different than what you might find today. So we wanted to share an overview of where the information on Google comes from and, in another post, how we approach organizing an ever-expanding universe of web pages, images, videos, real-world insights and all the other forms of information out there.

Information from the open web

You’re probably familiar with web listings on Google--the iconic “blue link” results that take you to pages from across the web. These listings, along with many other features on the search results page, link out to pages on the open web that we’ve crawled and indexed, following instructions provided by the site creators themselves.

Site owners can tell our web crawler (Googlebot) which pages we should crawl and index, and they have even more granular controls to indicate which portions of a page should appear as a text snippet on Google Search. Using our developer tools, site creators can choose whether they want to be discovered via Google and optimize their sites to improve how they’re presented, with the aim of getting more free traffic from people looking for the information and services they offer.
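
To make this concrete, here is a minimal sketch, using Python's standard urllib.robotparser module and a placeholder example.com domain, of the kind of robots.txt check a well-behaved crawler runs before fetching a page:

```python
# A minimal sketch of the robots.txt check a well-behaved crawler performs
# before fetching a page, using Python's standard library.
# The domain and paths below are placeholders, not real sites.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site owner's crawl instructions

url = "https://example.com/private/report.html"
if rp.can_fetch("Googlebot", url):
    print("Crawling allowed:", url)
else:
    print("Disallowed by robots.txt, skipping:", url)
```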

Google Search is one of many ways people find information and websites.  Every day, we send billions of visitors to sites across the web, and the traffic we send has grown every year since Google started. This traffic goes to a wide range of websites, helping people discover new companies, blogs, and products, not just the largest, well-known sites on the web. Every day, we send visitors to well over 100 million different websites. 

Common knowledge and public data sources

Creators, publishers and businesses of all sizes work to create unique content, products and services. But there is also information that falls into the category of what you might describe as common knowledge--information that wasn’t uniquely created or doesn’t “belong” to any one person, but represents a set of facts that is broadly known. Think: the birthdate of a historical figure, the height of the tallest mountain in South America, or even what day it is today. 

We help people easily find these types of facts through a variety of Google Search features like knowledge panels. The information comes from a wide range of openly licensed sources such as Wikipedia, The Encyclopedia of Life, Johns Hopkins University CSSE COVID-19 Data, and the Data Commons Project, an open knowledge database of statistical data we started in collaboration with the U.S. Census, Bureau of Labor Statistics, Eurostat, World Bank and many others.

Another type of common knowledge is the product of calculations, and this is information that Google often generates directly. So when you search for a conversion of time (“What time is it in London?”) or measurement (“How many pounds in a metric ton?”), or want to know the square root of 348, those are pieces of information that Google calculates. Fun fact: we also calculate the sunrise and sunset times for locations based on latitude and longitude!
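
As a quick illustration, the examples above boil down to ordinary arithmetic; a small Python sketch (the pounds-per-kilogram factor below is the standard conversion value, not a Google data source):

```python
# The calculations mentioned above are ordinary arithmetic.
import math

# "What's the square root of 348?"
print(math.sqrt(348))      # about 18.65

# "How many pounds in a metric ton?" (1 metric ton = 1,000 kg)
print(1000 * 2.20462)      # about 2,204.62 pounds
```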

Licenses and partnerships

When it comes to organizing information, unstructured data (words and phrases on web pages) is more challenging for our automated systems to understand. Structured databases, including public knowledge bases like Wikidata, make it a lot easier for our systems to understand, organize and present facts in helpful features and formats.

For some specialized types of data, like sports scores, information about TV shows and movies, and song lyrics, there are providers who work to organize information in a structured format and offer technical solutions (like APIs) to deliver fresh info. We license data from these companies to ensure that providers and creators (like music publishers and artists) are compensated for their work. When people come to Google looking for this information, they can access it right away.

We always work to deliver high quality information, and for topics like health or civic participation that affect people’s livelihoods, easy access to reliable, authoritative information is critically important. For these types of topics, we work with organizations like local health authorities, such as the CDC in the U.S., and nonpartisan, nonprofit organizations like Democracy Works to make authoritative information readily available on Google.

Information that people and businesses provide

There’s a wide range of information that exists in the world that isn’t currently available on the open web, so we look for ways to help people and businesses share these updates, including by providing information directly to Google. Local businesses can claim their Business Profile and share the latest with potential customers on Search, even if they don’t have a website. In fact, each month Google Search connects people with more than 120 million businesses that don’t have a website. On average, local results in Search drive more than 4 billion connections for businesses every month, including more than 2 billion visits to websites as well as connections like phone calls, directions, ordering food and making reservations.

We’re also deeply investing in new techniques to ensure that we’re reflecting the latest accurate information. This can be especially challenging as local information is constantly changing and not often accurately reflected on the web. For example, in the wake of COVID-19, we’ve used our Duplex conversational technology to call businesses, helping to update their listings by confirming details like modified store hours or whether they offer takeout and delivery. Since this work began, we’ve made over 3 million updates to businesses like pharmacies, restaurants and grocery stores that have been seen over 20 billion times in Maps and Search. 

Other businesses like airlines, retailers and manufacturers also provide Google and other sites with data about their products and inventory through direct feeds. So when you search for a flight from Bogota to Lima, or want to learn more about the specs of the hottest new headphones, Google can provide high quality information straight from the source.

We also provide ways for people to share their knowledge about places across more than 220 countries and territories. Thanks to millions of contributions submitted by users every day--from reviews and ratings to photos, answers to questions, address updates and more--people all around the world can find the latest, accurate local information on Google Search and Maps. 

Newly created information and insights from Google

Through advancements in AI and machine learning, we’ve developed innovative ways to derive new insights from the world around us, providing people with information that can not only help them in their everyday lives, but also keep them safe.

For years, people have turned to our Popular Times feature to help gauge the crowds at their favorite brunch spots or visit their local grocery store when it’s less busy. We're continually improving the accuracy and coverage of this feature, currently available for 20 million places around the world on Maps and Search. Now, this technology is serving more critical needs during COVID. With an expansion of our live busyness feature, these Google insights are helping people take crowdedness into account as they patronize businesses through the pandemic. 

We also generate new insights to aid in crisis response--from wildfire maps based on satellite data to AI-generated flood forecasting--to help people stay out of harm’s way when disaster strikes.

Organizing information and making it accessible and useful

Simply compiling a wide range of information is not enough. Core to making information accessible is organizing it in a way that people can actually use it. 

How we organize information continues to evolve, especially as new information and content formats become available. To learn more about our approach to providing you with helpful, well-organized search results pages, check out the next blog in our How Search Works series.

How Google organizes information to find what you’re looking for

When you come to Google and do a search, there might be billions of pages that are potential matches for your query, and millions of new pages being produced every minute. In the early days, we updated our search index once per month. Now, like other search engines, Google is constantly indexing new information to make it accessible through Search.


But to make all of this information useful, it’s critical that we organize it in a way that helps people quickly find what they’re looking for. With this in mind, here’s a closer look at how we approach organizing information on Google Search.


Organizing information in rich and helpful features

Google indexes all types of information--from text and images in web pages, to real-world information, like whether a local store has a sweater you’re looking for in stock. To make this information useful to you, we organize it on the search results page in a way that makes it easy to scan and digest. When you’re looking for jobs, you often want to see a list of specific roles, whereas if you’re looking for a restaurant, a map can help you easily find a spot nearby. 


We offer a wide range of features--from video and news carousels, to results with rich imagery, to helpful labels like star reviews--to help you navigate the available information more seamlessly. These features include links to web pages, so you can easily click to a website to find more information. In fact, we’ve grown the average number of outbound links to websites on a search results page from only 10 (“10 blue links”) to now an average of 26 links on a mobile results page. As we’ve added more rich features to Google Search, people are more likely to find what they’re looking for, and websites have more opportunity to appear on the first page of search results.


Image showing Google search results pages for “pancake” in 2012 vs. 2020.

When you searched for “pancake” in 2012, you mostly saw links to webpages. Now, you can easily find recipe links, videos, facts about pancakes, nutritional information, restaurants that serve pancakes, and more.

Presenting information in rich features, like an image carousel or a map, makes Google Search more helpful, both to people and to businesses. These features are designed so you can find the most relevant and useful information for your query. By improving our ability to deliver relevant results, we’ve seen that people are spending more time on the webpages they find through Search. The amount of time spent on websites following a click from Google Search has significantly grown year over year. 


Helping you explore and navigate topics

Another important element of organizing information is helping you learn more about a topic. After all, most queries don’t just have a single answer--they’re often open-ended questions like “dessert ideas.”


Our user experience teams spend a lot of time focused on how we can make it easy and intuitive to refine your search as you go. This is why we’ve introduced features like carousels, where you can easily swipe your phone screen to get more results. For instance, if you search for “meringue”, you might see a list of related topics along with related questions that other people have asked to help you on your journey.


Image showing a Google search results page for the query “meringue.”

How features and results are ranked

Organizing information into easy-to-use formats is just one piece of the puzzle. To make all of this information truly useful, we also must order, or “rank,” results in a way that ensures the most helpful and reliable information rises to the top.


Our ranking systems consider a number of factors--from what words appear on the page, to how fresh the content is--to determine what results are most relevant and helpful for a given query. Underpinning these systems is a deep understanding of information--from language and visual content to context like time and place--that allows us to match the intent of your query with the most relevant, highest quality results available.


In cases where there’s a single answer, like “When was the first Academy Awards?,” directly providing that answer is the most helpful result, so it will appear at the top of the page. But sometimes queries can have many interpretations. Take a query like “pizza”--you might be looking for restaurants nearby, delivery options, pizza recipes, and more. Our systems aim to compose a page that is likely to have what you’re looking for, ranking results for the most likely intents at the top of the page. Ranking a pizza recipe first would certainly be relevant, but our systems have learned that people searching for “pizza” are more likely to be looking for restaurants, so we’re likely to show a map with local restaurants first. Contrast that to a query like “pancake” where we find that people are more likely looking for recipes, so recipes often rank higher, and a map with restaurants serving pancakes may appear lower on the page.
Image showing Google search results pages for “pizza” and “pancake.”

An important thing to remember is that ranking is dynamic. New things are always happening in the world, so the available information and the meaning of queries can change day-by-day. This summer, searches for “why is the sky orange” turned from a general question about sunsets to a specific, locally relevant query about weather conditions on the West Coast of the U.S. due to wildfires. We constantly evaluate the quality of our results to ensure that even as queries or content changes, we’re still providing helpful information. More than 10,000 search quality raters around the world help us conduct hundreds of thousands of tests every year, and it’s through this process that we know that our investments in Google Search truly benefit people.


We’ve heard people ask if we design our search ranking systems to benefit advertisers, and we want to be clear: that is absolutely not the case. We never provide special treatment to advertisers in how our search algorithms rank their websites, and nobody can pay us to do so. 


Ongoing investment in a high quality experience

As we’ve seen for many years, and as was particularly apparent in the wake of COVID, information needs can change rapidly. As the world changes, we are always looking for new ways we can make Google Search better and help people improve their lives through access to information.


Every year, we make thousands of improvements to Google Search, all of which we test to ensure they’re truly making the experience more intuitive, modern, delightful, helpful and all-around better for the billions of queries we get every day. Search will never be a solved problem, but we’re committed to continuing to innovate to make Google better for you.


Transformers for Image Recognition at Scale

While convolutional neural networks (CNNs) have been used in computer vision since the 1980s, they were not at the forefront until 2012 when AlexNet surpassed the performance of contemporary state-of-the-art image recognition methods by a large margin. Two factors helped enable this breakthrough: (i) the availability of training sets like ImageNet, and (ii) the use of commoditized GPU hardware, which provided significantly more compute for training. As such, since 2012, CNNs have become the go-to model for vision tasks.

The benefit of using CNNs was that they avoided the need for hand-designed visual features, instead learning to perform tasks directly from data “end to end”. However, while CNNs avoid hand-crafted feature-extraction, the architecture itself is designed specifically for images and can be computationally demanding. Looking forward to the next generation of scalable vision models, one might ask whether this domain-specific design is necessary, or if one could successfully leverage more domain agnostic and computationally efficient architectures to achieve state-of-the-art results.

As a first step in this direction, we present the Vision Transformer (ViT), a vision model based as closely as possible on the Transformer architecture originally designed for text-based tasks. ViT represents an input image as a sequence of image patches, similar to the sequence of word embeddings used when applying Transformers to text, and directly predicts class labels for the image. ViT demonstrates excellent performance when trained on sufficient data, outperforming a comparable state-of-the-art CNN with four times fewer computational resources. To foster additional research in this area, we have open-sourced both the code and models.

The Vision Transformer treats an input image as a sequence of patches, akin to a series of word embeddings generated by a natural language processing (NLP) Transformer.

The Vision Transformer
The original text Transformer takes as input a sequence of words, which it then uses for classification, translation, or other NLP tasks. For ViT, we make the fewest possible modifications to the Transformer design to make it operate directly on images instead of words, and observe how much about image structure the model can learn on its own.

ViT divides an image into a grid of square patches. Each patch is flattened into a single vector by concatenating the channels of all pixels in a patch and then linearly projecting it to the desired input dimension. Because Transformers are agnostic to the structure of the input elements, we add learnable position embeddings to each patch, which allow the model to learn about the structure of the images. A priori, ViT does not know about the relative location of patches in the image, or even that the image has a 2D structure — it must learn such relevant information from the training data and encode structural information in the position embeddings.
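
For readers who want to see that step concretely, here is a minimal NumPy sketch of the patch-embedding procedure; the 16-pixel patches, 768-dimensional projection, and random initializations are illustrative assumptions rather than the released ViT code:

```python
# A minimal NumPy sketch of the patch-embedding step described above.
import numpy as np

image = np.random.rand(224, 224, 3)              # H x W x C input image
patch_size, hidden_dim = 16, 768

# Split the image into a grid of square patches and flatten each patch
# by concatenating the channels of all of its pixels.
per_side = 224 // patch_size                     # 14 patches per side -> 196 patches
patches = image.reshape(per_side, patch_size, per_side, patch_size, 3)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch_size * patch_size * 3)

# Linearly project each flattened patch to the Transformer input dimension
# and add a learnable position embedding (both randomly initialized here).
projection = np.random.randn(patch_size * patch_size * 3, hidden_dim)
position_embeddings = np.random.randn(per_side * per_side, hidden_dim)
tokens = patches @ projection + position_embeddings   # (196, 768) patch tokens
print(tokens.shape)
```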

Scaling Up

We first train ViT on ImageNet, where it achieves a best score of 77.9% top-1 accuracy. While this is decent for a first attempt, it falls far short of the state of the art — the current best CNN trained on ImageNet with no extra data reaches 85.8%. Despite mitigation strategies (e.g., regularization), ViT overfits the ImageNet task due to its lack of inbuilt knowledge about images.

To investigate the impact of dataset size on model performance, we train ViT on ImageNet-21k (14M images, 21k classes) and JFT (300M images, 18k classes), and compare the results to a state-of-the-art CNN, Big Transfer (BiT), trained on the same datasets. As previously observed, ViT performs significantly worse than the CNN equivalent (BiT) when trained on ImageNet (1M images). However, on ImageNet-21k (14M images) performance is comparable, and on JFT (300M images), ViT now outperforms BiT.

Finally, we investigate the impact of the amount of computation involved in training the models. For this, we train several different ViT models and CNNs on JFT. These models span a range of model sizes and training durations. As a result, they require varying amounts of compute for training. We observe that, for a given amount of compute, ViT yields better performance than the equivalent CNNs.

Left: Performance of ViT when pre-trained on different datasets. Right: ViT yields a good performance/compute trade-off.

High-Performing Large-Scale Image Recognition
Our data suggest that (1) with sufficient training ViT can perform very well, and (2) ViT yields an excellent performance/compute trade-off at both smaller and larger compute scales. Therefore, to see if performance improvements carried over to even larger scales, we trained a 600M-parameter ViT model.

This large ViT model attains state-of-the-art performance on multiple popular benchmarks, including 88.55% top-1 accuracy on ImageNet and 99.50% on CIFAR-10. ViT also performs well on the cleaned-up version of the ImageNet evaluation set, “ImageNet-ReaL”, attaining 90.72% top-1 accuracy. Finally, ViT works well on diverse tasks, even with few training data points. For example, on the VTAB-1k suite (19 tasks with 1,000 data points each), ViT attains 77.63%, significantly ahead of the single-model state of the art (SOTA) of 76.3%, and even matching the SOTA attained by an ensemble of multiple models (77.6%). Most importantly, these results are obtained using fewer compute resources than previous SOTA CNNs, e.g., 4x fewer than the pre-trained BiT models.

Vision Transformer matches or outperforms state-of-the-art CNNs on popular benchmarks. Left: Popular image classification tasks (ImageNet, including new validation labels ReaL, and CIFAR, Pets, and Flowers). Right: Average across 19 tasks in the VTAB classification suite.

Visualizations
To gain some intuition into what the model learns, we visualize some of its internal workings. First, we look at the position embeddings — parameters that the model learns to encode the relative location of patches — and find that ViT is able to reproduce an intuitive image structure. Each position embedding is most similar to others in the same row and column, indicating that the model has recovered the grid structure of the original images. Second, we examine the average spatial distance between one element attending to another for each transformer block. At higher layers (depths of 10-20) only global features are used (i.e., large attention distances), but the lower layers (depths 0-5) capture both global and local features, as indicated by a large range in the mean attention distance. By contrast, only local features are present in the lower layers of a CNN. These experiments indicate that ViT can learn features hard-coded into CNNs (such as awareness of grid structure), but is also free to learn more generic patterns, such as a mix of local and global features at lower layers, that can aid generalization.

Left: ViT learns the grid-like structure of the image patches via its position embeddings. Right: The lower layers of ViT contain both global and local features, while the higher layers contain only global features.
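
For intuition, the mean attention distance diagnostic can be sketched roughly as follows, assuming you already have an attention matrix over a 14 x 14 grid of 16-pixel patches (the random matrix below is only a stand-in for weights from a trained model):

```python
# A rough sketch of computing mean attention distance for one attention map.
import numpy as np

grid, patch_px = 14, 16
n = grid * grid
attn = np.random.rand(n, n)
attn /= attn.sum(axis=1, keepdims=True)      # each row is an attention distribution

# Pixel-space distance between the centers of every pair of patches.
rows, cols = np.divmod(np.arange(n), grid)
coords = np.stack([rows, cols], axis=1) * patch_px
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

# How far, on average, each query patch "looks", weighted by its attention.
mean_attention_distance = (attn * dists).sum(axis=1).mean()
print(mean_attention_distance)
```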

Summary
While CNNs have revolutionized computer vision, our results indicate that models tailor-made for imaging tasks may be unnecessary, or even sub-optimal. With ever-increasing dataset sizes, and the continued development of unsupervised and semi-supervised methods, the development of new vision architectures that train more efficiently on these datasets becomes increasingly important. We believe ViT is a preliminary step towards generic, scalable architectures that can solve many vision tasks, or even tasks from many domains, and are excited for future developments.

A preprint of our work as well as code and models are publicly available.

Acknowledgements
We would like to thank our co-authors in Berlin, Zürich, and Amsterdam: Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, and Jakob Uszkoreit. We would like to thank Andreas Steiner for crucial help with infrastructure and open-sourcing, Joan Puigcerver and Maxim Neumann for work on large-scale training infrastructure, and Dmitry Lepikhin, Aravindh Mahendran, Daniel Keysers, Mario Lučić, Noam Shazeer, and Colin Raffel for useful discussions. Finally, we thank Tom Small for creating the Visual Transformer animation in this post.

Source: Google AI Blog


Can AI make me trendier?

As a software engineer and generally analytic type, I like to craft theories for everything. Theories on how to build software, how to stay productive, how to be creative...and even how to dress well. For help with that last one, I decided to hire a personal stylist. As it turned out, I was not my stylist’s first software engineer client. “The problem with you people in tech is that you’re always looking for some sort of theory of fashion,” she told me. “But there is no formula–it’s about taste.”

Unfortunately, my stylist’s taste was a bit outside of my price range (I drew the line at a $300 hoodie). But I knew she was right. It’s true that computers (and maybe the people who program them) are better at solving problems with clear-cut answers than they are at navigating touchy-feely matters, like taste. Fashion trends are not set by data-crunching CPUs; they’re set by human tastemakers and fashionistas and their modern-day equivalents, social media influencers. 

I found myself wondering if I could build an app that combined trendsetters’ sense of style with AI’s efficiency to help me out a little. I started getting fashion inspiration from Instagram influencers who matched my style. When I saw an outfit I liked, I’d try to recreate it using items I already owned. It was an effective strategy, so I set out to automate it with AI.

First, I partnered up with one of my favorite programmers, who just so happened to also be an Instagram influencer, Laura Medalia (or codergirl_ on Instagram). With her permission, I uploaded all of Laura’s pictures to Google Cloud to serve as my outfit inspiration.
Image showing a screenshot of the Instagram profile of "codergirl."

Next, I painstakingly photographed every single item of clothing I owned, creating a digital archive of my closet.

Animated GIF showing a woman in a white room placing different clothing items on a mannequin and taking photos of them.

To compare my closet with Laura’s, I used Google Cloud Vision Product Search API, which uses computer vision to identify similar products. If you’ve ever seen a “See Similar Items” tab when you’re online shopping, it’s probably powered by a similar technology. I used this API to look through all of Laura’s outfits and all of my clothes to figure out which looks I could recreate. I bundled up all of the recommendations into a web app so that I could browse them on my phone, and voila: I had my own AI-powered stylist. It looks like this:

Animated GIF showing different screens that display items of clothing that can be paired together to create an outfit.
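
For those curious about the matching step, a condensed sketch roughly following the documented pattern of Google Cloud's Python Vision client looks like this; the project ID, location, product set name, and file path are placeholders, and the details may differ from the actual app:

```python
# A hedged sketch of searching a Vision Product Search product set (my closet)
# for items similar to one of Laura's outfit photos. The project ID, location,
# product set name, and file path are placeholders.
from google.cloud import vision

product_search_client = vision.ProductSearchClient()
image_annotator_client = vision.ImageAnnotatorClient()

with open("laura_outfit.jpg", "rb") as f:
    image = vision.Image(content=f.read())

product_set_path = product_search_client.product_set_path(
    project="my-project", location="us-west1", product_set="my-closet"
)
params = vision.ProductSearchParams(
    product_set=product_set_path,
    product_categories=["apparel-v2"],
)
image_context = vision.ImageContext(product_search_params=params)

response = image_annotator_client.product_search(image, image_context=image_context)
for result in response.product_search_results.results:
    print(result.product.display_name, result.score)
```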

Thanks to Laura’s sense of taste, I have lots of new ideas for styling my own wardrobe. Here’s one look I was able to recreate:

Image showing two screens; on the left, a woman is standing in a room wearing a fashionable outfit with the items that make up that outfit in two panels below her. In the other is another woman, wearing a similar outfit.

If you want to see the rest of my newfound outfits, check out the YouTube video at the top of this post, where I go into all of the details of how I built the app, or read my blog post.

No, I didn’t end up with a Grand Unified Theory of Fashion—but at least I have something stylish to wear while I’m figuring it out.


Two speeds — fast and faster — which one’s right for you?



Back in August, Google Fiber announced our plan to test 2 Gig service in Huntsville and Nashville. Today, we’re excited to announce that 2 Gig is now widely available in those two cities.

New and existing Google Fiber customers in Huntsville and Nashville now have the choice between our proven 1 Gig for $70 a month — plenty of speed and capacity for everyone and their devices all at the same time — and our even faster 2 Gig for just $100 a month — ready for power users, the latest devices, and advanced smart homes that use lots of internet.

I’m especially excited to be able to share 2 Gig with our customers. Over the last few months, my family has been testing 2 Gig in our home. And what made 2 Gig right for us was how everything we could do with 1 Gig is now faster, wired and wireless, even when we’re doing a lot — which is pretty much all the time in our house. With 2 Gig, I’ve never worried about massive file downloads, even while I’m on a video call with my boss and my husband is on a Zoom call in the next room, and then, all of a sudden, our home music system asks for an update. I know we can easily handle it all at once.

My husband works in IT and he can set up new machines in minutes, with the progress bar on a new Mac saying, “this may take up to an hour” only to be wrapped up before he gets back from our kitchen with a cold soda. He’s saved the day with last minute file transfers that none of his coworkers could have done from home.

And it's not just for work. As an occasional gamer, I love that when I do get time to play, I’m not waiting for massive system updates to finish or worried about network lag, but just trying to hold my own competitively. Even my house gets in on 2 Gig. Our smart lights, Wi-Fi speakers, 4K TVs, sprinkler system, and Wi-Fi-connected pellet smoker are staying connected without competing with us or each other for bandwidth. And we’ve got the peace of mind that there won’t be any data caps to stop us from doing even more.

2 Gig comes with the Google Fiber Multi-Gig Router, which uses Wi-Fi 6, the latest Wi-Fi standard. While it’s true that right now, we only have a few devices that have Wi-Fi 6 capability, we’re making the most of the hefty Wi-Fi speed provided by the tri-band Mesh Extender. And we’re ready for whatever comes next. (Who knows? I think my dog might be jealous of the Wi-Fi feeder that spoils my daughter’s cat.)

1 Gig is great for my family, and 2 Gig just gives us more of what we love. So, whether your home needs 1 Gig fast for everyone or 2 Gig faster for a few more everyones, Google Fiber is happy to get the right internet for your household, at the right speed for you.

(And if you're not in Huntsville or Nashville, don't worry. You can still sign up to test 2 Gig and other products through our Trusted Tester program.)

Posted by Amanda Peterson, Product Marketing Manager



