Monthly Archives: December 2021
Our authors’ take on 2021
Source: Google Search Central Blog
Beta Channel Update for Desktop
The Beta channel has been updated to 97.0.4692.71 for Windows, Mac and Linux.
A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
Source: Google Chrome Releases
2021 Year in Review: Google Quantum AI
Google’s Quantum AI team has had a productive 2021. Despite ongoing global challenges, we’ve made significant progress in our effort to build a fully error-corrected quantum computer, working toward our next hardware milestone of building an error-corrected quantum bit (qubit) prototype. At the same time, we have continued our commitment to realizing the potential of quantum computers in various applications. To that end, we published results in top journals, collaborated with researchers across academia and industry, and expanded our team to bring on new talent and expertise.
An update on hardware
The Quantum AI team is determined to build an error-corrected quantum computer within the next decade, and to simultaneously use what we learn along the way to deliver helpful—and even transformational—quantum computing applications. This long-term commitment breaks down broadly into three key questions for our quantum hardware:
- Can we demonstrate that quantum computers can outperform the classical supercomputers of today in a specific task? We demonstrated beyond-classical computation in 2019.
- Can we build a prototype of an error-corrected qubit? In order to use quantum computers to their full potential, we will need to realize quantum error correction to overcome the noise that is present during our computations. As a key step in this direction, we aim to realize the primitives of quantum error correction by redundantly encoding quantum information across several physical qubits, demonstrating that such redundancy leads to an improvement over using individual physical qubits. This is our current target.
- Can we build a logical qubit which does not have errors for an arbitrarily long time? Logical qubits encode information redundantly across several physical qubits, and are able to reduce the impact of noise on the overall quantum computation. Putting together a few thousand logical qubits would allow us to realize the full potential of quantum computers for various applications.
Progress toward building an error-corrected qubit prototype
The distance between the noisy quantum computers of today and the fully error-corrected quantum computers of the future is vast. In 2021, we made significant progress in closing this gap by working toward building a prototype logical qubit whose errors are smaller than those of the physical qubits on our chips.
This work requires improvements across the entire quantum computing stack. We have made chips with better qubits, improved the methods that we use to package these chips to better connect them with our control electronics, and developed techniques to calibrate large chips with several dozen qubits simultaneously.
These improvements culminated in two key results. First, we are now able to reset our qubits with high fidelity, allowing us to reuse qubits in quantum computations. Second, we have realized mid-circuit measurement that allows us to keep track of computation within quantum circuits. Together, the high-fidelity resets and mid-circuit measurements were used in our recent demonstration of exponential suppression of bit and phase flip errors using repetition codes, resulting in 100x suppression of these errors as the size of the code grows from 5 to 21 qubits.

Suppression of logical errors as the number of qubits in the repetition code is increased. As we increase the code size from 5 to 21 qubits, we see a 100x reduction in logical error. Image acknowledgement: Kevin Satzinger/Google Quantum AI
Repetition codes, an error correction tool, enable us to trade off between resources (more qubits) and performance (lower error), which will be central in guiding our hardware research and development going forward. This year we showed how error decreases as we increase the number of included qubits for a 1-dimensional code (a simplified circuit sketch follows below). We are currently running experiments to extend these results to two-dimensional surface codes, which will correct errors more comprehensively.
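To make the repetition-code primitives concrete, here is a minimal Cirq sketch of a distance-3 bit-flip repetition code built from the ingredients described above: parity checks between neighboring data qubits, mid-circuit measurement of the check qubits, and resets so those qubits can be reused each round. This is an illustration only, not the published experiment; the qubit layout, round count, and the absence of noise and decoding are all simplifications.

```python
# Minimal, illustrative distance-3 bit-flip repetition code in Cirq.
# Not the published experiment: no noise model, no decoder, few rounds.
import cirq

data = [cirq.LineQubit(i) for i in (0, 2, 4)]     # data qubits
checks = [cirq.LineQubit(i) for i in (1, 3)]      # parity-check (measure) qubits

def stabilizer_round(r: int):
    """One round of ZZ parity checks with mid-circuit measurement and reset."""
    ops = []
    for k, a in enumerate(checks):
        ops += [cirq.CNOT(data[k], a), cirq.CNOT(data[k + 1], a)]
        ops.append(cirq.measure(a, key=f'parity{k}_round{r}'))
        ops.append(cirq.reset(a))                 # reuse the check qubit next round
    return ops

circuit = cirq.Circuit()
for r in range(3):                                # a few rounds of error detection
    circuit.append(stabilizer_round(r))
circuit.append(cirq.measure(*data, key='data_final'))

result = cirq.Simulator().run(circuit, repetitions=100)
print(result.histogram(key='data_final'))
```

In the real experiment, the same pattern is scaled from 5 to 21 qubits, run for many rounds under hardware noise, and the recorded parity histories are decoded to estimate the logical error rate.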
Applications of quantum computation
In addition to building quantum hardware, our team is also looking for clear margins of quantum advantage in real world applications. With our collaborators in academia and industry, we are exploring fields where quantum computers can provide significant speedups, with realistic expectations that error-corrected quantum computers will likely require better than quadratic speedups for meaningful improvements.
As always, our collaborations with academic and industry partners were invaluable in 2021. One notable collaboration with Caltech showed that, under certain conditions, quantum machines can learn about physical systems from exponentially fewer experiments than what is conventionally required. This novel method was validated experimentally using 40 qubits and 1300 quantum operations, demonstrating a substantial quantum advantage even with the noisy quantum processors we have today. This paves the way to more innovation in quantum machine learning and quantum sensing, with potential near-term use cases.
In collaboration with researchers at Columbia University, we combined one of the most powerful techniques for chemical simulation, Quantum Monte Carlo, with quantum computation. This approach surpasses previous methods as a promising quantum approach to ground state many-electron calculations, which are critical in creating new materials and understanding their chemical properties. When we run a component of this technique on a real quantum computer, we are able to double the size of prior calculations without sacrificing accuracy of the measurements, even in the presence of noise on a device with up to 16 qubits. The resilience of this method to noise is an indication of its potential for scalability even on today’s quantum computers.
We continue to study how quantum computers can be used to simulate quantum physical phenomena—as was most recently reflected in our experimental observation of a time crystal on a quantum processor (Ask a Techspert: What exactly is a time crystal?). This was a great moment for theorists, who’ve pondered the possibility of time crystals for nearly a century. In other work, we also explored the emergence of quantum chaotic dynamics by experimentally measuring out-of-time-ordered correlations on one of our quantum computers, which was done jointly with collaborators at the NASA Ames Research Center; and experimentally measuring the entanglement entropy of the ground state of the Toric code Hamiltonian by creating its eigenstates using shallow quantum circuits with collaborators at the Technical University of Munich.
Our collaborators contributed to, and even inspired, some of our most impactful research in 2021. Quantum AI remains committed to discovering and realizing meaningful quantum applications in collaboration with scientists and researchers from across the world in 2022 and beyond as we continue our focus on machine learning, chemistry, and many-body quantum physics.
You can find a list of all our publications here.
Continuing investment in the quantum computing ecosystem
This year, at Google’s annual developer conference, Google I/O, we reaffirmed our commitment to the roadmap and investments required to make a useful quantum computer within the decade. While we were busy growing in Santa Barbara, we also continued to support researchers in the quantum community through our open source software. Our quantum programming framework, Cirq, continues to improve with contributions from the community. 2021 also saw the release of specialized tools in collaboration with partners in the ecosystem. Two examples of these are:
- The release of a new Fermionic Quantum Simulator for quantum chemistry applications in collaboration with QSimulate, taking advantage of the symmetry in quantum chemistry problems to provide efficient simulations.
- A significant upgrade to qsim that allows for simulation of noisy quantum circuits on high performance processors such as GPUs via Google Cloud, and qsim integration with NVIDIA’s cuQuantum SDK to enable qsim users to make the most of NVIDIA GPUs when developing quantum algorithms and applications (a brief simulation sketch follows below).
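For readers who want to try the simulators, here is a brief, hedged sketch of running a small noisy circuit through qsim via the qsimcirq Python package. The circuit, noise level and repetition count are arbitrary illustration choices; GPU and cuQuantum acceleration are enabled through the qsim build and its simulator options rather than anything shown here.

```python
# Illustrative only: simulate a small noisy circuit with qsim via qsimcirq.
import cirq
import qsimcirq

qubits = cirq.LineQubit.range(4)
circuit = cirq.Circuit(
    cirq.H.on_each(*qubits),
    [cirq.CZ(qubits[i], qubits[i + 1]) for i in range(3)],
    cirq.measure(*qubits, key='m'),
)

# Add simple depolarizing noise so the run exercises qsim's noisy-circuit support.
noisy = circuit.with_noise(cirq.depolarize(p=0.01))

sim = qsimcirq.QSimSimulator()
result = sim.run(noisy, repetitions=1000)
print(result.histogram(key='m'))
```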
We also released an open-source tool called stim, which provides a 10000x speedup when simulating error correction circuits.
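As a rough illustration of how stim is used (not a benchmark of the quoted speedup), the sketch below generates one of stim's built-in repetition-code memory circuits with a simple noise model and samples its detection events; the distance, number of rounds and error rates are arbitrary example values.

```python
# Illustrative only: sample detection events from a generated repetition code.
import stim

circuit = stim.Circuit.generated(
    "repetition_code:memory",
    distance=9,
    rounds=25,
    before_round_data_depolarization=0.03,
    before_measure_flip_probability=0.01,
)

sampler = circuit.compile_detector_sampler()
detection_events, observable_flips = sampler.sample(
    shots=10_000, separate_observables=True
)
print(detection_events.shape, float(observable_flips.mean()))
```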
You can access our portfolio of open-source software here.
Looking toward 2022

Resident quantum scientist Qubit the Dog taking part in a holiday sing-along led by team members Jimmy Chen and Ofer Naaman.
Through teamwork, collaboration, and some innovative science, we are excited about the progress that we have seen in 2021. We have big expectations for 2022 as we focus on progressing through our hardware milestones, the discovery of new quantum algorithms, and the realization of quantum applications on the quantum processors of today. To tackle our difficult mission, we are growing our team, building on our existing network of collaborators, and expanding our Santa Barbara campus. Together with the broader quantum community, we are excited to see the progress that quantum computing makes in 2022 and beyond.
Source: The Official Google Blog
Tools to help you tackle your New Year’s resolution
You always hear the standard New Year resolutions: Work out more. Run a marathon. Learn a new language. For me this year, it’s to learn three new party tricks (I’m optimistically hoping for more social interaction in 2022!). No matter what the goal is, it often feels that by February, I’ve lost some steam. Resolutions take time, and new habits and skills are (let’s admit) hard to build.
So this year, my New Year’s resolution is to stick to a New Year’s resolution. I did a little digging and found a few tools I have at my fingertips to get that resolution to stick.
First things first: Write down your goal
Don’t just think about your resolution; write it down. If you live by your inbox, schedule-send a January 1 New Year’s resolution email to yourself. What better way to kickstart the new year than with an email to your future self?
If you’re not into email, Google Keep is a great way to jot down resolution ideas. If you’re on the go when inspiration strikes, you can even create a Google Keep note with your voice.
And don’t forget good ol’ pen and paper. Recording something on paper is easy, and the physical movement of writing something down can make it stick in a certain way. So write it down, literally.
Next, create reminders
The hard part about keeping resolutions for me is changing my daily routine. So I decided to break down my resolution into smaller goals, and set up check-ins on Google Calendar. Twice a month, I put aside time to learn a party trick (my first one is going to be rolling a coin across my knuckles), and halfway through the year I set up a “dry run” performance with friends (whether that ends up being in-person or virtual) to keep myself accountable.
Aside from checkpoints, crossing items off a checklist also keeps me on track. So I further broke down my twice-a-month trick-learning efforts using Tasks. This means my smaller, bite-sized agenda items will show up everywhere, from Gmail to Google Slides (so I can’t ignore them!).

If you wrote down your resolution on Google Keep, that’s also a good place to create a to-do list and hit your smaller target goals on your way to your resolution. You can even set up timed reminders for each of the items to make sure you hit your goals.
Build satisfaction by tracking your progress
You can track your progress anywhere, like Keep or even Google Docs, but if you’re looking for more, try AppSheet. With AppSheet, you can build custom apps without any coding required. Need a custom app to track your workout progress? Looking for a journaling app on the go? AppSheet has a few templates you can try — or you can build your own if you want to get hyper-specific.
Make sure to reward yourself along the way
New habits and skills are hard to build, especially when you don’t see immediate results. So celebrating mini-milestones along the way (practiced 10 sessions ☑...rehearsed for my dry run ☑) helps me stay motivated.
How you reward yourself is up to you — maybe it’s taking a day for self care, or simply exchanging words of encouragement with your friends and family; a little kudos goes a long way. And if at any point along the way toward your goal you begin to feel a little weary, try some of the advice from our resilience expert at Google, who talks about breaking tasks into smaller challenges that are easier to tackle.
Here’s to 2022 — and sticking with our New Year resolutions.
Source: The Official Google Blog
Chrome Beta for Android Update
Hi everyone! We've just released Chrome Beta 97 (97.0.4692.70) for Android: it's now available on Google Play.
You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.
If you find a new issue, please let us know by filing a bug.
Krishna Govind
Google Chrome
Source: Google Chrome Releases
Prediction Framework, a time saver for Data Science prediction projects
Posted by Álvaro Lamas, Héctor Parra, Jaime Martínez, Julia Hernández, Miguel Fernandes, Pablo Gil
Acquiring high value customers using predicted Lifetime Value, taking specific actions on users with a high propensity to churn, generating and activating audiences based on machine learning processed signals… All of those marketing scenarios require analyzing first party data, performing predictions on the data and activating the results in the different marketing platforms, like Google Ads, as frequently as possible to keep the data fresh.
Feeding marketing platforms like Google Ads on a regular and frequent basis requires a robust, report-oriented and cost-efficient ETL & prediction pipeline. These pipelines are very similar regardless of the use case, and it’s very easy to fall into reinventing the wheel every time, or to manually copy & paste structural code, increasing the risk of introducing errors.
Wouldn't it be great to have a common reusable structure and just add the specific code for each of the stages?
Here is where Prediction Framework plays a key role in helping you implement and accelerate your first-party data prediction projects by providing the backbone elements of the predictive process.
Prediction Framework is a fully customizable pipeline that allows you to simplify the implementation of prediction projects. You only need to have the input data source, the logic to extract and process the data and a Vertex AutoML model ready to use along with the right feature list, and the framework will be in charge of creating and deploying the required artifacts. With a simple configuration, all the common artifacts of the different stages of this type of projects will be created and deployed for you: data extraction, data preparation (aka feature engineering), filtering, prediction and post-processing, in addition to some other operational functionality including backfilling, throttling (for API limits), synchronization, storage and reporting.
The Prediction Framework was built to be hosted on Google Cloud Platform. It makes use of Cloud Functions to do all the data processing (extraction, preparation, filtering and post-prediction processing); Firestore, Pub/Sub and Schedulers for the throttling system and to coordinate the different phases of the predictive process; Vertex AutoML to host your machine learning model; and BigQuery as the final storage of your predictions.
Prediction Framework Architecture
To get started with the Prediction Framework, a configuration file needs to be prepared with some environment variables about the Google Cloud project to be used, the data sources, the ML model to make the predictions and the scheduler for the throttling system. In addition, custom queries for the data extraction, preparation, filtering and post-processing need to be added when customizing the deploy files. Then, deployment is done automatically using a deployment script provided by the tool.
Once deployed, all the stages will be executed one after the other, storing the intermediate and final data in the BigQuery tables:
- Extract: this step will, on a timely basis, query the transactions from the data source corresponding to the run date (scheduler or backfill run date) and store them in a new table in the local project BigQuery.
- Prepare: immediately after the extract of the transactions for one specific date is available, the data will be picked up from the local BigQuery and processed according to the specs of the model. Once the data is processed, it will be stored in a new table in the local project BigQuery.
- Filter: this step will query the data stored by the prepare process, filter the required data and store it in the local project BigQuery (for example, only taking new customers’ transactions into consideration; what counts as a new customer is up to the instantiation of the framework for the specific use case).
- Predict: once the new customers are stored, this step will read them from BigQuery and call the prediction using the Vertex API (a hedged sketch of this step follows the list). A formula based on the result of the prediction could be applied to tune the value or to apply thresholds. Once the data is ready, it will be stored in BigQuery within the target project.
- Post_process: a formula could be applied to the AutoML batch results to tune the value or to apply thresholds. Once the data is ready, it will be stored in BigQuery within the target project.
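As a purely illustrative sketch of the predict step, the snippet below shows how a Vertex AI batch prediction that reads from and writes to BigQuery can be launched with the Vertex AI Python SDK. The project, region, dataset, table and model identifiers are placeholders, not the framework's real names, and the framework's deployed Cloud Functions handle this (plus throttling and status tracking) for you.

```python
# Hypothetical sketch of the "Predict" stage: launch a Vertex AI batch
# prediction over the filtered BigQuery table, landing results in BigQuery.
# All identifiers below are placeholders, not the framework's actual values.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="europe-west4")
model = aiplatform.Model(
    "projects/my-gcp-project/locations/europe-west4/models/1234567890"
)

batch_job = model.batch_predict(
    job_display_name="predict-new-customers-2021-12-01",
    bigquery_source="bq://my-gcp-project.pipeline_dataset.filtered_new_customers",
    bigquery_destination_prefix="bq://my-gcp-project.pipeline_dataset",
    instances_format="bigquery",
    predictions_format="bigquery",
    sync=True,  # block until the batch prediction job finishes
)
print(batch_job.state)
```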
One of the powerful features of the Prediction Framework is that it allows backfilling directly from the BigQuery user interface, so if you need to reprocess a whole period of time, it can be done in literally four clicks.
In summary: Prediction Framework simplifies the implementation of first-party data prediction projects, saving time and minimizing errors of manual deployments of recurrent architectures.
For additional information and to start experimenting, you can visit the Prediction Framework repository on GitHub.
Source: Google Developers Blog
“New normal” and other words we used a lot this year
There’s a lot to think about at the end of each year. What we accomplished, what we didn’t — what we made time for, or what we took a break from. At Google, the Search team looks at what sort of questions the world asked, and what answers we really needed. And of course, what momentary trends completely captivated us (looking at you, “tiktok pasta”).
As a writer, something I’ve been thinking about in the last few weeks of 2021 are the words we used this year. 2020 was the year of “now more than ever,” a phrase that began to feel meaningless as the “now more than ever”-worthy moments kept coming (and admittedly, as we all kept calling them that). If 2020 was the year of “now more than ever,” then what was 2021?
Once again, I turned to Ngrams, a Google tool launched in 2009 by part of the Google Books team. Ngrams shows how books and other pieces of literature have used certain words or phrases over time, and you can chart their popularity throughout the years. One caveat: Ngrams currently tracks data from 1800 to 2019 — prior to 2020, Ngrams’ data ranged from 1800 to 2012, but the team added a huge new dataset about two years ago. So while it remains to be seen how some sayings took over writing throughout 2020 and 2021, I wanted to see how the words we’re hearing and saying and writing today have shown up over time.
My first nomination: “new normal.” This is a phrase that I personally have heard…well, now more than ever, I suppose. This isn’t the first time “new normal” appeared in the lexicon, though: It began to see small bursts of usage in literature and other writing in the mid-19th century — though if you use Ngrams to look at examples of how it showed up, “new normal” was often in reference to types of academic institutions. And then “new normal” just sort of faded away…until the aughts, when it rose dramatically. Michael Ballback, who works on Google Books, told me that a lot of the post-2000s data that was added comes from e-books, whereas older data mostly came from libraries, so perhaps this could account for some of the jump. In any case, today it completely permeates our writing. (Which raises the question: Is there such a thing as normal if it’s constantly new?)

Then of course, I thought of “vaccine,” which made its Ngrams debut on a high before falling sharply between 1800 and 1813…only to rise again in the early to mid 1900s, when many scholarly articles were published about things like typhoid, cholera and pertussis vaccinations. Then it goes up and down, up and down, to an all-time high in 2003. It has slightly fallen off since — but remember, Ngrams’ data goes up until 2019, so I have my own assumptions about how it’s fared the past two years.

Google Books Ngrams Viewer chart showing the use over time of the phrase “vaccine,” which rises consistently beginning in 1900.
Lastly, I took a look at “hybrid.” Obviously it’s a word that’s been around for a while (according to Ngrams, it’s been in use since at least the year 1800, which is how far back the tool’s data goes) and it has gently, steadily risen since. It did spike in the early ‘80s, but browsing snippets from Google Books from that period shows it was used similarly to how it is now. Later, in the aughts, we start seeing it used to describe cars, and today…well, you probably already know.

What “hybrid” means hasn’t really changed, but it’s the situations we’re applying it to that have — there’s a much wider scope of daily life that falls under this category. “Hybrid” didn’t change, but how we live has. 2020 felt in many ways like a pause on life, and this year we began finding new, creative ways to adapt — a little of our old methods, mixed with the new. And that, to me, feels distinctly 2021.
Source: The Official Google Blog
Looking back on an interesting year
2021 is coming to a close, and what a year it has been.
As the pandemic has continued to shape what normal looks like and we all figure out how to work, learn, connect, and be in the world now, quality internet and ensuring access for more people have become a central focus not just for Google Fiber, but for many of our communities across the country. That’s a huge opportunity and responsibility for us, and we’re working to make 2022 and beyond even more connected.
Taking it farther faster
In 2021, we built to more households than in any other year. Many of the communities we announced this year around the country already have service, including South Salt Lake, Holladay, Taylorsville, Millcreek and North Salt Lake in Utah; Concord and Matthews in North Carolina; and Leon Valley in Texas.
While we still have a lot of work to do in many of our communities to bring access to as many people as possible, we continue to make our build processes more efficient and less disruptive. This will be a focus area for our teams across the country leading into 2022, as we expect to expand even more next year.
More internet for everything
In 2021, the rollout of our new 2 Gig service demonstrated just how much demand for internet had increased. It wasn’t just those working at home and gamers opting to double their download speeds (although they liked it, too!); we saw households of all varieties taking advantage of the opportunity to get more out of their internet.
With increased demand across all our products, we worked to ensure our network was there when our customers needed it, increasing capacity at every point of the network, right up to improving the in-home Wi-Fi experience. In 2022, we’ll continue to work to make our customers’ fast, reliable internet even better.
Helping communities thrive
While a lot of great things happened in 2021, the pandemic continued to pose challenges for many of our customers and our cities. With the internet’s increasingly central role in our daily lives, we saw many more organizations stepping into digital equity work. To meet that demand, we expanded our partnership with NTEN to support 11 fellows in eight Google Fiber cities. We’ve continued to work with partners across the country to help more people access the internet and develop the skills to take advantage of online opportunities.
This year, thousands of people participated in Google Fiber-funded programs in our communities through over 170 different local organizations across the country, from trainings to device distributions to STEM events.
We also provided gigabit internet at no cost to more than 440 organizations this year through our Community Connections program, helping them meet the needs of their clients and support their work in the community, and provided gigabit internet at no cost to over 3,500 households through the Gigabit Community program.
Growing the Google Fiber team
This year, we’ve grown both our central and our local city teams to help keep up with our expanded efforts across the country. We recently launched a new Google Fiber careers site to help candidates find us, and we’re still hiring! We have hundreds of open roles, so if all this sounds like an interesting, rewarding way to make a difference, then maybe you should join us. One thing is certain, 2022 is not going to be dull around here.
Posted by the Google Fiber Team