The Stable channel has been updated to 120.0.6099.199 for Mac and Linux and 120.0.6099.199/200 for Windows which will roll out over the coming days/weeks. A full list of changes in this build is available in the log.
The Extended Stable channel has been updated to 120.0.6099.199 for Mac and 120.0.6099.200 for Windows which will roll out over the coming days/weeks.
Security Fixes and Rewards
Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.
This update includes 6 security fixes. Below, we highlight fixes that were contributed by external researchers. Please see the Chrome Security Page for more information.
[$15000][1501798] High CVE-2024-0222: Use after free in ANGLE. Reported by Toan (suto) Pham of Qrious Secure on 2023-11-13
[$15000][1505009] High CVE-2024-0223: Heap buffer overflow in ANGLE. Reported by Toan (suto) Pham and Tri Dang of Qrious Secure on 2023-11-24
[$10000][1505086] High CVE-2024-0224: Use after free in WebAudio. Reported by Huang Xilin of Ant Group Light-Year Security Lab on 2023-11-25
[$TBD][1506923] High CVE-2024-0225: Use after free in WebGPU. Reported by Anonymous on 2023-12-01
We would also like to thank all security researchers that worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.
As usual, our ongoing internal security work was responsible for a wide range of fixes:
[1515353] Various fixes from internal audits, fuzzing and other initiatives
Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
I'm sure you're all wondering why we have gathered you here today. That's right: it's the end of year blog post in which we go over what happened in 2023 on Google Search Central, and have some fun along the way!
The Stable channel is being updated to 119.0.6045.214 (Platform version: 15633.72.0) for most ChromeOS devices and will be rolled out over the next few days. This build contains a number of bug fixes and security updates.
If you find new issues, please let us know in one of the following ways:
Posted by Jeff Dean, Chief Scientist, Google DeepMind & Google Research, Demis Hassabis, CEO, Google DeepMind, and James Manyika, SVP, Google Research, Technology & Society
This has been a year of incredible progress in the field of Artificial Intelligence (AI) research and its practical applications.
As ongoing research pushes AI even farther, we look back to our perspective published in January of this year, titled “Why we focus on AI (and to what end),” where we noted:
We are committed to leading and setting the standard in developing and shipping useful and beneficial applications, applying ethical principles grounded in human values, and evolving our approaches as we learn from research, experience, users, and the wider community.
We also believe that getting AI right — which to us involves innovating and delivering widely accessible benefits to people and society, while mitigating its risks — must be a collective effort involving us and others, including researchers, developers, users (individuals, businesses, and other organizations), governments, regulators, and citizens.
We are convinced that the AI-enabled innovations we are focused on developing and delivering boldly and responsibly are useful, compelling, and have the potential to assist and improve lives of people everywhere — this is what compels us.
In this Year-in-Review post we’ll go over some of Google Research's and Google DeepMind’s efforts putting these paragraphs into practice safely throughout 2023.
Advances in products & technologies
This was the year generative AI captured the world’s attention, creating imagery, music, stories, and engaging conversation about everything imaginable, at a level of creativity and a speed almost implausible a few years ago.
In February, we first launched Bard, a tool that you can use to explore creative ideas and explain things simply. It can generate text, translate languages, write different kinds of creative content and more.
In May, we watched the results of months and years of our foundational and applied work announced on stage at Google I/O. Principally, this included PaLM 2, a large language model (LLM) that brought together compute-optimal scaling, an improved dataset mixture, and model architecture to excel at advanced reasoning tasks.
By fine-tuning and instruction-tuning PaLM 2 for different purposes, we were able to integrate it into numerous Google products and features, including:
An update to Bard, which enabled multilingual capabilities. Since its initial launch, Bard is now available in more than 40 languages and over 230 countries and territories, and with extensions, Bard can find and show relevant information from Google tools used every day — like Gmail, Google Maps, YouTube, and more.
Search Generative Experience (SGE), which uses LLMs to reimagine both how to organize information and how to help people navigate through it, creating a more fluid, conversational interaction model for our core Search product. This work extended the search engine experience from primarily focused on information retrieval into something much more — capable of retrieval, synthesis, creative generation and continuation of previous searches — while continuing to serve as a connection point between users and the web content they seek.
MusicLM, a text-to-music model powered by AudioLM and MuLAN, which can make music from text, humming, images or video, as well as musical accompaniments to singing.
Duet AI, our AI-powered collaborator that provides users with assistance when they use Google Workspace and Google Cloud. Duet AI in Google Workspace, for example, helps users write, create images, analyze spreadsheets, draft and summarize emails and chat messages, and summarize meetings. Duet AI in Google Cloud helps users code, deploy, scale, and monitor applications, as well as identify and accelerate resolution of cybersecurity threats.
In June, following last year’s release of our text-to-image generation model Imagen, we released Imagen Editor, which lets users interactively edit generated images with region masks and natural language prompts, giving much more precise control over the model output.
Later in the year, we released Imagen 2, which improved outputs via a specialized image aesthetics model based on human preferences for qualities such as good lighting, framing, exposure, and sharpness.
In October, we launched a feature that helps people practice speaking and improve their language skills. The key technology that enabled this functionality was a novel deep learning model developed in collaboration with the Google Translate team, called Deep Aligner. This single new model has led to dramatic improvements in alignment quality across all tested language pairs, reducing average alignment error rate from 25% to 5% compared to alignment approaches based on Hidden Markov models (HMMs).
Then in December, we launched Gemini, our most capable and general AI model. Gemini was built to be multimodal from the ground up across text, audio, image and videos. Our initial family of Gemini models comes in three different sizes, Nano, Pro, and Ultra. Nano models are our smallest and most efficient models for powering on-device experiences in products like Pixel. The Pro model is highly-capable and best for scaling across a wide range of tasks. The Ultra model is our largest and most capable model for highly complex tasks.
In a technical report about Gemini models, we showed that Gemini Ultra’s performance exceeds current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in LLM research and development. With a score of 90.04%, Gemini Ultra was the first model to outperform human experts on MMLU, and achieved a state-of-the-art score of 59.4% on the new MMMU benchmark.
Building on AlphaCode, the first AI system to perform at the level of the median competitor in competitive programming, we introduced AlphaCode 2 powered by a specialized version of Gemini. When evaluated on the same platform as the original AlphaCode, we found that AlphaCode 2 solved 1.7x more problems, and performed better than 85% of competition participants.
At the same time, Bard got its biggest upgrade with its use of the Gemini Pro model, making it far more capable at things like understanding, summarizing, reasoning, coding, and planning. In six out of eight benchmarks, Gemini Pro outperformed GPT-3.5, including in MMLU, one of the key standards for measuring large AI models, and GSM8K, which measures grade school math reasoning. Gemini Ultra will come to Bard early next year through Bard Advanced, a new cutting-edge AI experience.
Gemini Pro is also available on Vertex AI, Google Cloud’s end-to-end AI platform that empowers developers to build applications that can process information across text, code, images, and video. Gemini Pro was also made available in AI Studio in December.
To best illustrate some of Gemini’s capabilities, we produced a series of short videos with explanations of how Gemini could:
In addition to our advances in products and technologies, we’ve also made a number of important advancements in the broader fields of machine learning and AI research.
Expanding the versatility of models requires the ability to perform higher-level and multi-step reasoning. This year, we approached this target following several research tracks. For example, algorithmic prompting is a new method that teaches language models reasoning by demonstrating a sequence of algorithmic steps, which the model can then apply in new contexts. This approach improves accuracy on one middle-school mathematics benchmark from 25.9% to 61.1%.
By providing algorithmic prompts, we can teach a model the rules of arithmetic via in-context learning.
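To make the idea concrete, here is a minimal sketch of what an algorithmic prompt for multi-digit addition might look like: the worked example spells out every intermediate digit sum and carry rather than only the final answer, so the model can imitate the full procedure on new inputs. The prompt text and formatting are illustrative assumptions, not the exact prompts used in the published work.

```python
# A minimal sketch of algorithmic prompting for multi-digit addition.
# The worked example demonstrates every intermediate step (digit sums and
# carries) instead of only question/answer pairs. Format is illustrative.

ALGORITHMIC_EXAMPLE = """\
Q: What is 487 + 256?
A: Add the digits from right to left, tracking the carry.
   ones:     7 + 6 = 13 -> write 3, carry 1
   tens:     8 + 5 + 1 = 14 -> write 4, carry 1
   hundreds: 4 + 2 + 1 = 7 -> write 7, carry 0
   Reading the digits back: 743.
The answer is 743.
"""

def build_prompt(question: str) -> str:
    """Prepend the fully worked algorithmic example to a new question."""
    return f"{ALGORITHMIC_EXAMPLE}\nQ: {question}\nA:"

prompt = build_prompt("What is 3829 + 474?")
print(prompt)
# The prompt would then be sent to a language model of your choice; the model
# is expected to reproduce the step-by-step carry procedure in its answer.
```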
In the domain of visual question answering, in a collaboration with UC Berkeley researchers, we showed how we could better answer complex visual questions (“Is the carriage to the right of the horse?”) by combining a visual model with a language model trained to answer visual questions by synthesizing a program to perform multi-step reasoning.
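Conceptually, the language model emits a short program over visual primitives, and executing that program against the image yields the answer. The toy sketch below stubs out the primitives with hard-coded object positions so it runs end to end; the primitive names (`find`, `is_right_of`) are hypothetical and not the actual API from the paper.

```python
# Toy sketch of program synthesis for visual question answering.
# A language model would emit a short program like answer_question below,
# composed of visual primitives; here the primitives are stubbed with a
# hard-coded scene so the example runs. Names are hypothetical.

SCENE = {"horse": (120, 200), "carriage": (310, 205)}  # (x, y) centers

def find(obj_name: str):
    """Stub visual primitive: locate an object in the image."""
    return SCENE.get(obj_name)

def is_right_of(a, b) -> bool:
    """Stub spatial primitive: is object a to the right of object b?"""
    return a is not None and b is not None and a[0] > b[0]

# Program an LLM might synthesize for:
# "Is the carriage to the right of the horse?"
def answer_question() -> str:
    carriage = find("carriage")
    horse = find("horse")
    return "yes" if is_right_of(carriage, horse) else "no"

print(answer_question())  # -> "yes" for this toy scene
```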
We are now using a general model that understands many aspects of the software development life cycle to automatically generate code review comments, respond to code review comments, make performance-improving suggestions for pieces of code (by learning from past such changes in other contexts), fix code in response to compilation errors, and more.
In a multi-year research collaboration with the Google Maps team, we were able to scale inverse reinforcement learning and apply it to the world-scale problem of improving route suggestions for over 1 billion users. Our work culminated in a 16–24% relative improvement in global route match rate, helping to ensure that routes are better aligned with user preferences.
We also continue to work on techniques to improve the inference performance of machine learning models. In work on computationally-friendly approaches to pruning connections in neural networks, we were able to devise an approximation algorithm to the computationally intractable best-subset selection problem that is able to prune 70% of the edges from an image classification model and still retain almost all of the accuracy of the original.
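The approximation algorithm itself is more involved, but the sketch below illustrates the general shape of the task it solves: keeping a small subset of connections and zeroing the rest. Plain magnitude thresholding is used here purely as a stand-in for the paper's best-subset approach.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float = 0.7) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights.

    A simple stand-in for the combinatorial best-subset selection problem
    described above, which the paper tackles with a dedicated approximation
    algorithm rather than plain magnitude thresholding.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of connections to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

layer = np.random.randn(256, 256)          # toy weight matrix
pruned = prune_by_magnitude(layer, sparsity=0.7)
print(f"kept {np.count_nonzero(pruned) / pruned.size:.0%} of the edges")
```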
In work on accelerating on-device diffusion models, we were also able to apply a variety of optimizations to attention mechanisms, convolutional kernels, and fusion of operations to make it practical to run high quality image generation models on-device; for example, enabling “a photorealistic and high-resolution image of a cute puppy with surrounding flowers” to be generated in just 12 seconds on a smartphone.
Advances in capable language and multimodal models have also benefited our robotics research efforts. We combined separately trained language, vision, and robotic control models into PaLM-E, an embodied multi-modal model for robotics, and Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalized instructions for robotic control.
RT-2 architecture and training: We co-fine-tune a pre-trained vision-language model on robotics and web data. The resulting model takes in robot camera images and directly predicts actions for a robot to perform.
Designing efficient, robust, and scalable algorithms remains a high priority. This year, our work included: applied and scalable algorithms, market algorithms, system efficiency and optimization, and privacy.
We introduced AlphaDev, an AI system that uses reinforcement learning to discover enhanced computer science algorithms. AlphaDev uncovered a faster algorithm for sorting, a method for ordering data, which led to improvements in the LLVM libc++ sorting library that were up to 70% faster for shorter sequences and about 1.7% faster for sequences exceeding 250,000 elements.
The TPUGraphs dataset has 44 million graphs for ML program optimization.
We developed a new load balancing algorithm for distributing queries to a server, called Prequal, which minimizes a combination of requests-in-flight and estimates the latency. Deployments across several systems have saved CPU, latency, and RAM significantly. We also designed a new analysis framework for the classical caching problem with capacity reservations.
Heatmaps of normalized CPU usage transitioning to Prequal at 08:00.
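As a rough sketch of the selection rule, one can imagine scoring each replica by a blend of its outstanding requests and a probed latency estimate, then routing to the minimizer. The field names and weighting below are assumptions for illustration; the production Prequal system additionally relies on asynchronous probing and hot/cold replica classification.

```python
from dataclasses import dataclass

@dataclass
class Replica:
    name: str
    requests_in_flight: int   # currently outstanding requests on this server
    est_latency_ms: float     # latency estimate, e.g. from recent probes

def pick_replica(replicas: list[Replica], latency_weight: float = 1.0) -> Replica:
    """Choose the replica minimizing a blend of load and estimated latency.

    A toy score combining requests-in-flight with probed latency; illustrative
    only, not the exact scoring used in the deployed system.
    """
    return min(
        replicas,
        key=lambda r: r.requests_in_flight + latency_weight * r.est_latency_ms,
    )

servers = [
    Replica("a", requests_in_flight=3, est_latency_ms=12.0),
    Replica("b", requests_in_flight=1, est_latency_ms=30.0),
    Replica("c", requests_in_flight=2, est_latency_ms=8.0),
]
print(pick_replica(servers).name)  # -> "c" with the default weighting
```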
We continued optimizing Google’s large embedding models (LEMs), which power many of our core products and recommender systems. Some new techniques include Unified Embedding for battle-tested feature representations in web-scale ML systems and Sequential Attention, which uses attention mechanisms to discover high-quality sparse model architectures during training.
In Project Green Light, we partnered with 13 cities around the world to help improve traffic flow at intersections and reduce stop-and-go emissions. Early numbers from these partnerships indicate a potential for up to 30% reduction in stops and up to 10% reduction in emissions.
In our contrails work, we analyzed large-scale weather data, historical satellite images, and past flights. We trained an AI model to predict where contrails form and reroute airplanes accordingly. In partnership with American Airlines and Breakthrough Energy, we used this system to demonstrate contrail reduction by 54%.
Contrails detected over the United States using AI and GOES-16 satellite imagery.
Finally, we continued to develop better models for weather prediction at longer time horizons. Improving on MetNet and MetNet-2, in this year’s work on MetNet-3, we now outperform traditional numerical weather simulations up to twenty-four hours ahead. In the area of medium-term, global weather forecasting, our work on GraphCast showed significantly better prediction accuracy for up to 10 days compared to HRES, the most accurate operational deterministic forecast, produced by the European Centre for Medium-Range Weather Forecasts (ECMWF). In collaboration with ECMWF, we released WeatherBench-2, a benchmark for evaluating the accuracy of weather forecasts in a common framework.
A selection of GraphCast’s predictions rolling across 10 days showing specific humidity at 700 hectopascals (about 3 km above surface), surface temperature, and surface wind speed.
Health and the life sciences
The potential of AI to dramatically improve processes in healthcare is significant. Our initial Med-PaLM model was the first model capable of achieving a passing score on the U.S. medical licensing exam. Our more recent Med-PaLM 2 model improved by a further 19%, achieving an expert-level accuracy of 86.5%. These Med-PaLM models are language-based, enable clinicians to ask questions and have a dialogue about complex medical conditions, and are available to healthcare organizations as part of MedLM through Google Cloud.
In the same way our general language models are evolving to handle multiple modalities, we have recently shown research on a multimodal version of Med-PaLM capable of interpreting medical images, textual data, and other modalities, describing a path for how we can realize the exciting potential of AI models to help advance real-world clinical care.
Med-PaLM M is a large multimodal generative model that flexibly encodes and interprets biomedical data including clinical language, imaging, and genomics with the same model weights.
AI systems can also discover completely new signals and biomarkers in existing forms of medical data. In work on novel biomarkers discovered in retinal images, we demonstrated that a number of systemic biomarkers spanning several organ systems (e.g., kidney, blood, liver) can be predicted from external eye photos. In other work, we showed that combining retinal images and genomic information helps identify some underlying factors of aging.
In the genomics space, we worked with 119 scientists across 60 institutions to create a new map of the human genome, or pangenome. This more equitable pangenome better represents the genomic diversity of global populations. Building on our ground-breaking AlphaFold work, our work on AlphaMissense this year provides a catalog of predictions for 89% of all 71 million possible missense variants as either likely pathogenic or likely benign.
Examples of AlphaMissense predictions overlaid on AlphaFold predicted structures (red – predicted as pathogenic; blue – predicted as benign; grey – uncertain). Red dots represent known pathogenic missense variants, blue dots represent known benign variants. Left: HBB protein. Variants in this protein can cause sickle cell anaemia. Right: CFTR protein. Variants in this protein can cause cystic fibrosis.
We also shared an update on progress towards the next generation of AlphaFold. Our latest model can now generate predictions for nearly all molecules in the Protein Data Bank (PDB), frequently reaching atomic accuracy. This unlocks new understanding and significantly improves accuracy in multiple key biomolecule classes, including ligands (small molecules), proteins, nucleic acids (DNA and RNA), and those containing post-translational modifications (PTMs).
On the neuroscience front, we announced a new collaboration with Harvard, Princeton, the NIH, and others to map an entire mouse brain at synaptic resolution, beginning with a first phase that will focus on the hippocampal formation — the area of the brain responsible for memory formation, spatial navigation, and other important functions.
Quantum computing
Quantum computers have the potential to solve big, real-world problems across science and industry. But to realize that potential, they must be significantly larger than they are today, and they must reliably perform tasks that cannot be performed on classical computers.
This year, we took an important step towards the development of a large-scale, useful quantum computer. Our breakthrough is the first demonstration of quantum error correction, showing that it’s possible to reduce errors while also increasing the number of qubits. To enable real-world applications, these qubit building blocks must perform more reliably, lowering the error rate from ~1 in 10^3 typically seen today, to ~1 in 10^8.
Responsible AI research
Design for Responsibility
Generative AI is having a transformative impact in a wide range of fields including healthcare, education, security, energy, transportation, manufacturing, and entertainment. Given these advances, the importance of designing technologies consistent with our AI Principles remains a top priority. We also recently published case studies of emerging practices in society-centered AI. And in our annual AI Principles Progress Update, we offer details on how our Responsible AI research is integrated into products and risk management processes.
Proactive design for Responsible AI begins with identifying and documenting potential harms. For example, we recently introduced a three-layered context-based framework for comprehensively evaluating the social and ethical risks of AI systems. During model design, harms can be mitigated with the use of responsible datasets.
We initiated several efforts to improve safety and transparency about online content. For example, we introduced SynthID, a tool for watermarking and identifying AI-generated images. SynthID is imperceptible to the human eye, doesn't compromise image quality, and allows the watermark to remain detectable, even after modifications like adding filters, changing colors, and saving with various lossy compression schemes.
We also launched About This Image to help people assess the credibility of images, showing information like an image's history, how it's used on other pages, and available metadata about an image. And we explored safety methods that have been developed in other fields, learning from established situations where there is low-risk tolerance.
SynthID generates an imperceptible digital watermark for AI-generated images.
Privacy remains an essential aspect of our commitment to Responsible AI. We continued improving our state-of-the-art privacy preserving learning algorithm DP-FTRL, developed the DP-Alternating Minimization algorithm (DP-AM) to enable personalized recommendations with rigorous privacy protection, and defined a new general paradigm to reduce the privacy costs for many aggregation and learning tasks. We also proposed a scheme for auditing differentially private machine learning systems.
On the applications front we demonstrated that DP-SGD offers a practical solution in the large model fine-tuning regime and showed that images generated by DP diffusion models are useful for a range of downstream tasks. We proposed a new algorithm for DP training of large embedding models that provides efficient training on TPUs without compromising accuracy.
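For readers less familiar with DP-SGD, its core update is per-example gradient clipping followed by calibrated Gaussian noise. The NumPy sketch below shows a single such step under simplifying assumptions (no privacy accountant, toy gradients); it illustrates the standard DP-SGD recipe, not the specific embedding-model algorithm mentioned above.

```python
import numpy as np

def dp_sgd_step(per_example_grads: np.ndarray,
                params: np.ndarray,
                lr: float = 0.1,
                clip_norm: float = 1.0,
                noise_multiplier: float = 1.1) -> np.ndarray:
    """One DP-SGD update: clip each example's gradient, average, add noise.

    per_example_grads has shape (batch_size, num_params). The privacy
    accounting that turns noise_multiplier into an (epsilon, delta)
    guarantee is omitted in this sketch.
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_example_grads * scale                  # per-example clipping
    mean_grad = clipped.mean(axis=0)
    noise = np.random.normal(
        0.0, noise_multiplier * clip_norm / len(per_example_grads),
        size=mean_grad.shape)                            # calibrated Gaussian noise
    return params - lr * (mean_grad + noise)

params = np.zeros(4)
grads = np.random.randn(32, 4)                           # toy per-example gradients
params = dp_sgd_step(grads, params)
print(params)
```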
We also teamed up with a broad group of academic and industrial researchers to organize the first Machine Unlearning Challenge, which addresses the scenario in which certain training images must be forgotten to protect the privacy or rights of individuals. We shared a mechanism for extractable memorization, and participatory systems that give users more control over their sensitive data.
Our work in adversarial testing engaged community voices from historically marginalized communities. We partnered with groups such as the Equitable AI Research Round Table (EARR) to ensure we represent the diverse communities who use our models and engage with external users to identify potential harms in generative model outputs.
We established a dedicated Google AI Red Team focused on testing AI models and products for security, privacy, and abuse risks. We showed that attacks such as “poisoning” or adversarial examples can be applied to production models and surface additional risks such as memorization in both image and text generative models. We also demonstrated that defending against such attacks can be challenging, as merely applying defenses can cause other security and privacy leakages. We also introduced model evaluation for extreme risks, such as offensive cyber capabilities or strong manipulation skills.
Democratizing AI through tools and education
As we advance the state-of-the-art in ML and AI, we also want to ensure people can understand and apply AI to specific problems. We released MakerSuite (now Google AI Studio), a web-based tool that enables AI developers to quickly iterate and build lightweight AI-powered apps. To help AI engineers better understand and debug AI, we released LIT 1.0, a state-of-the-art, open-source debugger for machine learning models.
Colab, our tool that helps developers and students access powerful computing resources right in their web browser, reached over 10 million users. We’ve just added AI-powered code assistance to all users at no cost — making Colab an even more helpful and integrated experience in data and ML workflows.
One of the most used features is “Explain error” — whenever the user encounters an execution error in Colab, the code assistance model provides an explanation along with a potential fix.
To ensure AI produces accurate knowledge when put to use, we also recently introduced FunSearch, a new approach that generates verifiably true knowledge in mathematical sciences using evolutionary methods and large language models.
Google has spearheaded an industry-wide effort to develop AI safety benchmarks under the MLCommons standards organization with participation from several major players in the generative AI space including OpenAI, Anthropic, Microsoft, Meta, Hugging Face, and more. Along with others in the industry we also co-founded the Frontier Model Forum (FMF), which is focused on ensuring safe and responsible development of frontier AI models. With our FMF partners and other philanthropic organizations, we launched a $10 million AI Safety Fund to advance research into the ongoing development of the tools for society to effectively test and evaluate the most capable AI models.
The items highlighted in this post are a small fraction of the research work we have done throughout the last year. Find out more at the Google Research and Google DeepMind blogs, and our list of publications.
Future vision
As multimodal models become even more capable, they will empower people to make incredible progress in areas from science to education to entirely new areas of knowledge.
Progress continues apace, and as our products and research advance over the coming year, people will find more and more interesting creative uses for AI.
If pursued boldly and responsibly, we believe that AI can be a foundational technology that transforms the lives of people everywhere — this is what excites us!
Hi, everyone! We've just released Chrome 120 (120.0.6099.144) for Android: it'll become available on Google Play over the next few days.
This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.
Android releases contain the same security fixes as their corresponding Desktop (Windows: 120.0.6099.129/.130; Mac & Linux: 120.0.6099.129) unless otherwise noted.
Image: Te Rito journalism cadets learning at Ngā Kete Wānanga Marae in September 2023
For over twenty years the Google News Initiative has worked to help local news organisations adapt to digital transformation, and 2023 was no different. Our goal, globally and here in Aotearoa, is to help news media companies as they build sustainable businesses, connect with readers and engage audiences in a digital environment.
In the past year, as outlined below, our programmes in New Zealand have engaged dozens of publishers and more than 100 journalists, providing critical information on how newsrooms can use digital technology to find and tell stories in new ways, reach larger audiences and generate more revenues.
Google News Showcase
Google News Showcase, our curated online news experience and licensing program, launched in Aotearoa in 2022, provides news publishers with more opportunities to create connections with their readers. Across a range of large, small, regional and ethnically diverse publishers, 48 individual mastheads are now on News Showcase, representing the vast majority of news publishers in New Zealand.
Most recently Newshub, New Zealand's independent broadcast and digital news service, joined the program, with Sarah Bristow, Senior Director News, Warner Bros. Discovery ANZ, noting: “This partnership is a significant step for Newshub, ensuring our brilliant reporting continues to reach audiences where they are, and helping to create a sustainable future for local journalism. We are very pleased to have reached an agreement with Google.”
Advancing quality journalism
Training Kiwi journalists
We partnered with Telum Media to deliver eight open training sessions around the country, free of charge, to share the latest tools and processes to support digital reporting. In the last year more than 100 Kiwi journalists joined workshops run by local experts including training on Google’s research tool, Pinpoint, to support journalists’ reporting and creative data visualisation techniques. Journalists interested in future training sessions in 2024 can sign up here.
Supporting misinformation tracking & media literacy
It’s critical we can all differentiate quality information from misinformation in the moments that matter. New Zealand headed to the polls in October 2023, and with a wide range of political information sources readily available online, we partnered with CrossCheck at RMIT FactLab to launch immersive and interactive training events that help newsrooms and community media learn skills to analyse online information during the election period. More than 200 journalists, media executives, media community leaders and students took part in 12 sessions in New Zealand, with two of these facilitated in te reo Māori and one in Mandarin.
For the second consecutive year, Squiz Kids rolled out its media literacy module to give children aged 8 to 12 years the skills to decide which information sources are trustworthy, and which are not. The eight-part teaching resource, Newshounds, created in partnership with Google, was aligned with the New Zealand curriculum and used by 72 Kiwi classrooms in 2023.
Supporting Diversity, Equity and Inclusion
At Ngā Kete Wānanga Marae, 12 Te Rito journalism cadets from diverse backgrounds participated in the second annual Te Rito Training Camp, a digital journalism camp founded on the Google News Initiative training curriculum. Cadets met with some of the leading journalists from New Zealand and Australia to learn investigative techniques, cultural history, digital skills and resilience training. For the first time, eight Australian Indigenous journalists and three journalists from the Pacific joined the New Zealand based Pasifika and Māori cadets at the camp.
In July, Google hosted the inaugural Tagata Moana Media Fono at our offices in Tāmaki Makaurau Auckland, organised by Pacific Media Network, bringing together all parts of the Pasifika media community.
Strengthening and evolving publisher business models
The GNI Advertising Lab Series
The Google News Initiative's major training program for 2023-2024 is the nine-month Advertising Lab. The Lab aims to equip small and mid-sized news organisations with the latest information and support to improve their understanding, strategy, and infrastructure as it relates to digital advertising, with a view to maximising their site performance and advertising revenue. Facilitated by Google and delivered by A&A Digital Services, a Google Certified Publishing Partner and Google Ad Manager 360 Platform Partner, the Lab supported more than 25 participants from 10 New Zealand news organisations, who will receive additional individual implementation support in 2024 to assist in their digital advertising performance.
The GNI Fundamentals Lab
The Fundamentals Lab is a collaborative three-month lab in which Google reviews news publishers’ sites and shares practical, easy-to-follow steps to grow their audience, expand their ad revenues and increase reader revenues. The Lab delivers learning sessions and working group sessions, as well as bespoke audits. In 2023, 13 participants from ten local news organisations participated in the Fundamentals Lab.
FT Strategies Program: Exec North Star & Digital Business Models Workshops
Kiwi news publishers participated in our Google News Initiative programmes facilitated by FT Strategies including:
The Exec North Star Workshop Series, a two-month intensive programme aimed at publishers with a strong online presence, in which senior executives work to define and accelerate progress towards a single ambitious organisational goal, using the FT’s proprietary North Star methodology to accelerate digital growth and strengthen commercial sustainability.
The Digital Business Models Workshop Series, a two-month program designed to help publishers at the beginning of their digital reader revenue journey understand and identify reader revenue models as they evolve their digital businesses.
Empowering news organisations through technological innovation
Google is focused on partnering with news organisations on initiatives that further their digital transformation. One such partnership is with NZME and we’re pleased to share these examples of how we’ve supported their effort to promote original, trusted journalism across NZME’s digital titles.
As we embark on 2024, we’re proud of the continued work to partner with the New Zealand news industry, to contribute to a thriving news ecosystem that seeks to benefit all New Zealanders.
Posted by Caroline Rainsford, Country Director, Google New Zealand
The Stable channel has been updated to 120.0.6099.129 for Mac and Linux and 120.0.6099.129/130 for Windows which will roll out over the coming days/weeks. A full list of changes in this build is available in the log.
The Extended Stable channel has been updated to 120.0.6099.129 for Mac and 120.0.6099.130 for Windows which will roll out over the coming days/weeks.
Security Fixes and Rewards
Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.
This update includes 1 security fix. Below, we highlight fixes that were contributed by external researchers. Please see the Chrome Security Page for more information.
[$NA][1513170] High CVE-2023-7024: Heap buffer overflow in WebRTC. Reported by Clément Lecigne and Vlad Stolyarov of Google's Threat Analysis Group on 2023-12-19
We would also like to thank all security researchers that worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.
Google is aware that an exploit for CVE-2023-7024 exists in the wild.
Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.