Google and Minister Judith Collins co-host “an evening of AI” at Parliament

Government officials, business leaders and Google reps discussed AI’s potential to build a brighter future for New Zealand

Aug 22 - Google has reaffirmed its commitment to supporting the New Zealand government in harnessing the power of AI to build a brighter future for the country. This was expressed during high-level discussions held yesterday in Wellington with the Honourable Judith Collins, Minister for Digitising Government, and other key government officials.

At the event, co-hosted by Minister Collins and Google New Zealand, a delegation of Google representatives, led by Country Director Caroline Rainsford and including Urs Hölzle, one of Google’s earliest employees, shared insights on AI's potential to drive economic growth, innovation, and societal progress. Key government figures were in attendance, including Paul James (Government Chief Digital Officer).

Google’s visit to Parliament aimed to showcase the exciting potential of AI to bring positive change to New Zealand. As a leader in AI innovation, Google also highlighted the company’s readiness to support this journey, while stressing the need for proactive engagement from government agencies to fully realise these opportunities.

Caroline Rainsford addresses the crowd at last night's Hui

Caroline Rainsford, Country Director of Google New Zealand, says: "The energy at Parliament House was palpable. There was definitely a shared excitement about AI's potential to transform New Zealand. From revolutionising healthcare to personalising education, the possibilities are immense.”

The discussions also highlighted the importance of smart regulations and collaborative efforts between the public and private sectors, to ensure the responsible and beneficial development of AI in New Zealand.

Rainsford adds: “Minister Collins’ optimistic approach to AI resonates with our vision. With our strong local presence, AI expertise, Cloud tools, and more, Google is ready to support the government’s vision for a digital New Zealand. We’re confident that we can help the country realise its AI aspirations with action and proactive engagement from government agencies."

Chrome for Android Update

Hi, everyone! We've just released Chrome 128 (128.0.6613.88) for Android. It'll become available on Google Play over the next few days.

This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Android releases contain the same security fixes as their corresponding Desktop releases (Windows & Mac: 128.0.6613.84/.85 and Linux: 128.0.6613.84) unless otherwise noted.


Harry Souders
Google Chrome

Chrome Stable for iOS Update

Hi everyone! We've just released Chrome Stable 128 (128.0.6613.92) for iOS; it'll become available on the App Store in the next few hours.

This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Harry Souders
Google Chrome

Dev Channel Update for ChromeOS / ChromeOS Flex

The Dev channel is being updated to OS version: 16002.2.0, Browser version: 129.0.6668.0 for most ChromeOS devices.

If you find new issues, please let us know in one of the following ways:

  1. File a bug
  2. Visit our ChromeOS communities
    1. General: Chromebook Help Community
    2. Beta Specific: ChromeOS Beta Help Community
  3. Report an issue or send feedback on Chrome
  4. Interested in switching channels? Find out how.

Matt Nelson,

Google ChromeOS

Google Meet increases ultra-low latency live streaming support to 100,000 viewers in distributed audiences

What’s changing

For select Google Workspace editions*, we’re pleased to announce that the Google Meet ultra-low latency viewing experience for live streamed meetings will now support up to 100,000 viewers. This gives organizations the flexibility to reach a wider audience with an improved user experience at lower bandwidth consumption. To receive the ultra-low latency experience, no more than 25,000 viewers can be connected to a single regional data center at a time.


Who’s impacted 

Admins and end users 


Why it’s important 

Live streaming is a critical tool for reaching large audiences, such as town halls or keynote events. Increasing support for the low-latency live streaming experience from 25,000 to 100,000 viewers helps our customers reach a wider audience, while their users benefit from several functional and quality improvements, such as:

  • A virtually lag-free experience
  • Significantly increased speaker video resolution (up to 720p per speaker)
  • Shared content and presentations shown up to 2880x1800
  • Improved automatic camera cuts that focus on the most relevant speakers & content 
  • Audience interaction through emoji reactions, polls and Q&A, and more.


Additional details

Enterprise Content Delivery Network (eCDN) for Google Meet
If large groups of your audience are connecting from a single network location or a shared gateway, you may benefit from using eCDN for Meet to get full media quality with substantial network bandwidth savings. For more information on eCDN, see this post on the Workspace Updates blog and visit our Help Center.    


Viewers can now join ultra-low latency live streams from Google Meet room hardware
The Google Meet ultra-low latency viewing experience for live streamed meetings is now also available from Google Meet room hardware. Support for room hardware means that users can join and watch live streams together in smaller or larger groups. To view a live stream via Google Meet hardware, invite the room to a view-only calendar event, provided that the host has allowed guests to modify events. When the event is about to start, the live stream will be visible by name as an upcoming event in the room agenda. Join the live stream by tapping it on the touch screen.


Meeting hosts and meeting organizers can invite rooms directly in view-only calendar events — visit the Help Center to learn more about live streaming a video meeting. If the calendar event is locked for editing, individual users can also duplicate the event and create their own view-only copy with the rooms they want to add as viewing locations. Visit the Help Center to learn more about viewing a live stream.



Availability

  • Available to Google Workspace Enterprise Plus, Education Plus, and Enterprise Essentials Plus customers*

*Note: The ultra-low latency live streaming experience is rolling out at a slower pace for some customers. Once you receive the experience, you’ll be able to take advantage of these updates.



Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 129 (129.0.6668.9) for Android. It's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Stable Channel Update for Desktop

The Chrome team is delighted to announce the promotion of Chrome 128 to the stable channel for Windows, Mac and Linux. This will roll out over the coming days/weeks.


Chrome 128.0.6613.84 (Linux) and 128.0.6613.84/.85 (Windows, Mac) contain a number of fixes and improvements -- a list of changes is available in the log. Watch out for upcoming Chrome and Chromium blog posts about new features and big efforts delivered in 128.

Chrome 128.0.6613.84 (Windows, Mac) has been pushed to the extended stable channel as well.

 Security Fixes and Rewards

Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.


This update includes 38 security fixes. Below, we highlight fixes that were contributed by external researchers. Please see the Chrome Security Page for more information.


[$36000][358296941] High CVE-2024-7964: Use after free in Passwords. Reported by Anonymous on 2024-08-08

[$11000][356196918] High CVE-2024-7965: Inappropriate implementation in V8. Reported by TheDog on 2024-07-30

[$10000][355465305] High CVE-2024-7966: Out of bounds memory access in Skia. Reported by Renan Rios (@HyHy100) on 2024-07-25

[$7000][355731798] High CVE-2024-7967: Heap buffer overflow in Fonts. Reported by Tashita Software Security on 2024-07-27

[$1000][349253666] High CVE-2024-7968: Use after free in Autofill. Reported by Han Zheng (HexHive) on 2024-06-25

[TBD][351865302] High CVE-2024-7969: Type Confusion in V8. Reported by CFF of Topsec Alpha Team on 2024-07-09

[TBD][360700873] High CVE-2024-7971: Type confusion in V8. Reported by Microsoft Threat Intelligence Center (MSTIC), Microsoft Security Response Center (MSRC) on 2024-08-19

[$11000][345960102] Medium CVE-2024-7972: Inappropriate implementation in V8. Reported by Simon Gerst (intrigus-lgtm) on 2024-06-10

[$7000][345518608] Medium CVE-2024-7973: Heap buffer overflow in PDFium. Reported by soiax on 2024-06-06

[$3000][339141099] Medium CVE-2024-7974: Insufficient data validation in V8 API. Reported by bowu(@gocrashed) on 2024-05-07

[$3000][347588491] Medium CVE-2024-7975: Inappropriate implementation in Permissions. Reported by Thomas Orlita on 2024-06-16

[$2000][339654392] Medium CVE-2024-7976: Inappropriate implementation in FedCM. Reported by Alesandro Ortiz on 2024-05-10

[$1000][324770940] Medium CVE-2024-7977: Insufficient data validation in Installer. Reported by Kim Dong-uk (@justlikebono) on 2024-02-11

[$1000][40060358] Medium CVE-2024-7978: Insufficient policy enforcement in Data Transfer. Reported by NDevTK on 2022-07-21

[TBD][356064205] Medium CVE-2024-7979: Insufficient data validation in Installer. Reported by VulnNoob on 2024-07-29

[TBD][356328460] Medium CVE-2024-7980: Insufficient data validation in Installer. Reported by VulnNoob on 2024-07-30

[$1000][40067456] Low CVE-2024-7981: Inappropriate implementation in Views. Reported by Thomas Orlita on 2023-07-14

[$500][350256139] Low CVE-2024-8033: Inappropriate implementation in WebApp Installs. Reported by Lijo A.T on 2024-06-30

[$500][353858776] Low CVE-2024-8034: Inappropriate implementation in Custom Tabs. Reported by Bharat (mrnoob) on 2024-07-18

[TBD][40059470] Low CVE-2024-8035: Inappropriate implementation in Extensions. Reported by Microsoft on 2022-04-26


We would also like to thank all security researchers who worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.

Google is aware that an exploit for CVE-2024-7971 exists in the wild.


As usual, our ongoing internal security work was responsible for a wide range of fixes:

  • [361165957] Various fixes from internal audits, fuzzing and other initiatives


Many of our security bugs are detected using AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, or AFL.



Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Prudhvikumar Bommana
Google Chrome

Announcing v202408 of the Google Ad Manager API

We're pleased to announce that v202408 of the Google Ad Manager API is available starting today, August 21, 2024. This release brings support for setting contextual targeting with ContentLabelTargeting and VerticalTargeting. It also adds AdsTxtService for reading MCM supply chain diagnostics.

In reporting, the VIDEO_PLACEMENT_NAME dimension has been replaced by the VIDEO_PLCMT_NAME dimension, which reflects the updated IAB definition.

For the full list of changes, check the release notes. Contact us on the Ad Manager API forum with any API-related questions.

Fluent Bit WriteAPI Connector: Lowering the barrier to streaming data

Automating ingestion processes is crucial for modern businesses that handle vast amounts of data daily. In today's fast-paced digital landscape, the ability to seamlessly collect, process, and analyze data can make the difference between staying ahead of the competition and falling behind. To simplify ingestion, tools such as Fluent Bit enable customers to route data between pluggable sources and sinks without needing to write a single line of code. Instead, data routing is managed via a config file. The Fluent Bit WriteAPI Connector is a pluggable sink built on top of the BigQuery Storage Write API that enables organizations to rapidly develop a data ingestion pipeline.


What are the BigQuery Storage Write API and Fluent Bit?

The BigQuery Storage Write API is a high-performance data-ingestion API for BigQuery. It leverages both batching and streaming methods to ingest records into BigQuery in real time. The Write API offers features such as the ability to scale, and it provides exactly-once delivery to guarantee that data is not duplicated. Using the Write API directly typically requires technical expertise, as users must navigate one of the client SDKs. This can create a high barrier to entry for some customers to stream data into BigQuery.

Fluent Bit is a widely-used open-source observability agent known for its lightweight design, speed, and flexibility. It operates by collecting logs, traces and metrics through various inputs such as local or network files, filtering and buffering them, and then routing them to designated outputs. Fluent Bit's high-performance parsing capabilities allow for data to be processed according to user specifications. The output component is a configurable plugin that directs data to different destinations, such as various tables in BigQuery. There can be multiple WriteAPI outputs and each output can be independently configured to use a specific write mode, enabling seamless data streaming into BigQuery based on tag/match pairs.


Why Use the Fluent Bit WriteAPI Connector?

Our solution to the technical challenges posed by using the WriteAPI is the Fluent Bit WriteAPI Connector. This connector automates the data ingestion process, eliminating the need for customers to write any code. The entire pipeline is managed through a single configuration file, making it easy to use. The flow of data is depicted in the diagram below.

Fluent Bit Flow Diagram

Example Use Case

Say we wish to monitor a log file containing JSON data, and we would like to ingest this data into a BigQuery table that has a single column titled “Text” of type String. A line from the log file looks like this:

{"Text": "Hello, World"}
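For reference, one way to express that single-column schema in the JSON format accepted by BigQuery tools is sketched below (the column name and type come from this example; the NULLABLE mode is an assumption, as it is BigQuery's default):

```
[
  {"name": "Text", "type": "STRING", "mode": "NULLABLE"}
]
```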

Setup Process

    1. Setting Up Fluent Bit: The first step is to install and configure Fluent Bit. Once installed, Fluent Bit must be configured to collect data from your desired sources. This involves defining inputs, such as log files or system metrics, that Fluent Bit will monitor. This is explained below.
    2. Cloning the Google Git Repository: Next, clone the Google Git Repository that contains the Fluent Bit WriteAPI Connector. This repository includes all the necessary files to set up the connector, along with an example configuration file to help you get started. Let’s say the git repo is cloned at /usr/local/fluentbit-bigquery-writeapi-sink. Edit the file in the git repo named plugins.conf to provide the full path to the writeapi plugin. For example, the contents of the file can now look like this: 
    [PLUGINS]
      Path    /usr/local/fluentbit-bigquery-writeapi-sink/out_writeapi.so 
    3. Setting Up BigQuery Tables: Ensure that your BigQuery tables are set up and ready to receive data. This might involve creating new tables or configuring existing ones to match the data schema you intend to use. For example, create the BigQuery table with a schema containing the column Text of type STRING. Let’s say the table is created at myProject.myDataset.myTable.
Destination table schema

    4. Prepare the input file: We will be reading data from a log file at /usr/local/logfile.log. Let’s start with an empty log file. Create the log file as follows: 
    touch /usr/local/logfile.log
    5. Configuring the Plugin: The most critical step is setting up the configuration file for the Fluent Bit WriteAPI Connector. This single file controls the entire data pipeline, from input collection to data filtering and routing. The configuration file is straightforward and highly intuitive. It allows you to define various parameters, such as input sources, data filters, and output destinations. Create a configuration file in, say, /usr/local, and call it demo.conf. See details on how to format a configuration file. It looks like this:
      Sample Config File

This routes the data from /usr/local/logfile.log to the BigQuery table at myProject.myDataset.myTable. There are additional configurable fields that control the stream, such as chunking, the asynchronous response queue, and the type of stream. These fields let you control how your data is streamed.
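As a rough sketch of what the pictured demo.conf might contain: the tail input below is a standard Fluent Bit plugin, while the writeapi output parameter names shown (ProjectId, DatasetId, TableId) are assumptions that should be verified against the sample configuration in the git repository:

```
[INPUT]
    Name    tail
    Path    /usr/local/logfile.log
    Tag     log1

[OUTPUT]
    Name    writeapi
    Match   log1
    ProjectId    myProject
    DatasetId    myDataset
    TableId      myTable
```

Here the input's Tag and the output's Match field pair the two up, routing lines appended to the log file into the BigQuery table.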

To run the pipeline, use the command:

fluent-bit -c /usr/local/demo.conf

As the log file is updated, new lines will automatically appear in the BigQuery table. For example, to populate the log file you can run the following command:

echo "{\"Text\": \"Hello, world\"}" >> /usr/local/logfile.log

Note that the default flush interval in Fluent Bit is 1 minute, so it might take a minute before the log file is flushed. The BigQuery table will now be updated as follows:

Populated BigQuery table

Key Features

The connector supports a wide variety of features including multi-instancing, dynamic scaling, exactly-once delivery, and automatic retry.

    1. Multi-Instancing

    • The multi-instancing feature of the Fluent Bit WriteAPI Connector is designed to offer flexibility in routing data. Specifically, users can configure the connector to handle multiple data inputs and outputs in various combinations. This feature also supports more complex configurations, such as multiple inputs feeding into multiple outputs, allowing data to be aggregated or distributed as needed. An input connector is labeled with a tag field. In our example, this has value log1. Data is routed to an output connector based on the value of its match field. In our example, this also has value log1, meaning there is a 1-to-1 correspondence between the input and output connector. The match field is a regex so it can be used to connect with multiple inputs. For example, if this was set to * then data from all inputs would flow to this output.

    2. Dynamic Scaling

    • Handling large volumes of data efficiently is crucial for modern pipelines. The dynamic scaling feature addresses the issue of potential overloads in the Write API. As data is streamed into BigQuery, there may be times when the API queue becomes full—by default, it can hold up to 1000 pending responses. When this limit is reached, no new data can be appended until some of the pending responses are processed, which can create back pressure in the system. To manage this, the connector automatically scales up its capacity by creating an additional network connection when it detects that the number of pending responses has reached the threshold.

    3. Exactly-Once

    • The "exactly-once" feature ensures that each piece of data is sent and recorded in BigQuery exactly once, so no data is duplicated. If the connector encounters an intermittent issue while sending a specific piece of data, it will synchronously retry sending it until it is successful. This ensures data is delivered correctly.

    4. Retry Functionality

    • The retry functionality allows the connector to handle temporary failures gracefully. The retry mechanism is configurable, meaning users can set how many times the system should attempt to resend the data before giving up. By default, the connector will retry sending failed data up to four times. In the default stream mode, if a row of data fails to send, it is retried while other rows continue to be processed. However, in the "exactly once" mode, the retry process is synchronous, meaning the system will wait for the failed row to be successfully sent before moving on to subsequent rows.

    5. Error Handling

    • Error handling in the connector is designed to catch and manage issues that may arise during data transmission. The connector will continue processing incoming data even if earlier data had a failure. Any permanent issues that are encountered are logged to the console.
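The multi-instancing routing described in feature 1 can be sketched in configuration form. In this sketch, two tailed log files with distinct tags feed a single output that uses a wildcard match to aggregate both (the file paths here are hypothetical, and the writeapi output parameter names are assumptions to check against the repository's sample configuration):

```
[INPUT]
    Name    tail
    Path    /usr/local/app1.log
    Tag     log1

[INPUT]
    Name    tail
    Path    /usr/local/app2.log
    Tag     log2

[OUTPUT]
    Name    writeapi
    Match   log*
    ProjectId    myProject
    DatasetId    myDataset
    TableId      myTable
```

With Match set to * instead, data from every input would flow to this output.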

Conclusion

The ability to efficiently collect, process, and analyze data is a critical factor for business success. The Fluent Bit WriteAPI Connector stands out as a powerful solution that simplifies and automates the data ingestion process, bridging the gap between Fluent Bit's versatile data collection capabilities and Google BigQuery's robust analytics platform.

By eliminating the need for complex coding and manual data management, the Fluent Bit WriteAPI Connector lowers the barrier to entry for businesses of all sizes. Whether you're a small startup or a large enterprise, this tool allows you to effortlessly set up and manage your data pipelines with a single configuration file. Features like multi-instancing, dynamic scaling, exactly-once delivery, and error handling ensure that your data is ingested accurately, reliably, and in real time.

The straightforward setup process, combined with the flexibility and scalability of the connector, makes it a valuable asset for any organization looking to harness the power of their data. By automating the ingestion process, businesses can focus on what truly matters: deriving actionable insights from their data to drive growth and innovation.

Less Is More: Principles for Simple Comments

This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By David Bendory

Simplicity is the ultimate sophistication. — Leonardo da Vinci

You’re staring at a wall of code resembling a Gordian knot of Klingon. What’s making it worse? A sea of code comments so long that you’d need a bathroom break just to read them all! Let’s fix that.

  • Adopt the mindset of someone unfamiliar with the project to ensure simplicity. One approach is to separate the process of writing your comments from reviewing them; proofreading your comments without code context in mind helps ensure they are clear and concise for future readers.

  • Use self-contained comments to clearly convey intent without relying on the surrounding code for context. If you need to read the code to understand the comment, you’ve got it backwards!

Not self-contained; requires reading the code:

    // Respond to flashing lights in
    // rearview mirror.

Suggested alternative:

    // Pull over for police and/or yield to
    // emergency vehicles.

The code being commented:

    while flashing_lights_in_rearview_mirror() {
      move_to_slower_lane() || stop_on_shoulder();
    }

  • Include only essential information in the comments and leverage external references to reduce cognitive load on the reader. For comments suggesting improvements, links to relevant bugs or docs keep comments concise while providing a path for follow-up. Note that linked docs may be inaccessible, so use judgment in deciding how much context to include directly in the comments.

Too much potential improvement in the comment:

    // The local bus offers good average-case
    // performance. Consider using the subway
    // which may be faster depending on factors
    // like time of day, weather, etc.

Suggested alternative:

    // TODO: Consider various factors to
    // present the best transit option.
    // See issuetracker.fake/bus-vs-subway

The code being commented:

    commute_by_local_bus();

  • Avoid extensive implementation details in function-level comments. When implementations change, such details often result in outdated comments. Instead, describe the public API contract, focusing on what the function does.

Too much implementation detail:

    // For high-traffic intersections prone to
    // accidents, pass through the intersection
    // and make 3 right turns, which is
    // equivalent to turning left.

Suggested alternative:

    // Perform a safe left turn at a
    // high-traffic intersection.
    // See discussion in
    // dangerous-left-turns.fake/about.

The code being commented:

    fn safe_turn_left() {
      go_straight();
      for i in 0..3 {
        turn_right();
      }
    }