Announcing the deprecation of Ad Exchange Buyer II API

The Ad Exchange Buyer II API is now deprecated, and the following resources will be sunset on September 29th, 2023:

Accessing the sunset resources will return an error response following the sunset date. Note that RTB Troubleshooting resources aren’t included in this list, and will continue to be supported until further notice.

To continue programmatically accessing your Client Access and Marketplace configurations, you should migrate to the Authorized Buyers Marketplace API. For more information, we recommend that you review the Marketplace guide and samples. If you have any questions or feedback, feel free to reach out to us via the Authorized Buyers API Forum.
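
For illustration, the following is a minimal sketch of listing your client configurations through the Authorized Buyers Marketplace API using the google-api-python-client discovery client. The buyer account ID and key file path are placeholders, and the scope and resource names should be double-checked against the Marketplace API reference.

# Minimal sketch (not official sample code): list client configurations with
# the Authorized Buyers Marketplace API. Account ID and key path are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPE = "https://www.googleapis.com/auth/authorized-buyers-marketplace"
BUYER_ACCOUNT_ID = "123456789"  # placeholder buyer account ID

credentials = service_account.Credentials.from_service_account_file(
    "service-account-key.json", scopes=[SCOPE]
)
marketplace = build("authorizedbuyersmarketplace", "v1", credentials=credentials)

# Page through the buyer's client configurations, the replacement for the
# Ad Exchange Buyer II API client access resources.
request = marketplace.buyers().clients().list(parent=f"buyers/{BUYER_ACCOUNT_ID}")
while request is not None:
    response = request.execute()
    for client in response.get("clients", []):
        print(client["name"], client.get("displayName"))
    request = marketplace.buyers().clients().list_next(request, response)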

Google Workspace Updates Weekly Recap – March 10, 2023

1 New update

Unless otherwise indicated, the features below are fully launched or in the process of rolling out, and are available to all Google Workspace and G Suite customers. Rollouts should take no more than 15 business days to complete if launching to both Rapid and Scheduled Release at the same time; if not, each stage of rollout should take no more than 15 business days to complete.


Additional details for the updated Gmail experience on Android foldable devices and tablets 
Last month, we announced an improved Chat, Meet, and Gmail experience on Android foldable devices and tablets. As an additional detail on functionality, note that Gmail supports a 2-pane view in landscape orientation only. | This is now available. | Learn more


Previous announcements

The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.


Indicate your Google Voice availability with more options
We’re making it easier for you to indicate your availability in Google Voice on web and mobile. Previously, you could only indicate your availability for all of your ring groups. This update gives you greater flexibility to manage your ring group availability in Google Voice, without the need to sign out of Voice or block all incoming calls using “Do Not Disturb”. | Available to Google Voice Standard and Premier customers only. | Learn more


Refreshed interface for Google Drive, Google Docs, Google Sheets, and Google Slides
In the coming weeks, you’ll notice a new look and feel for Google Drive, Docs, Sheets, and Slides on the web. Following the release of Google Material Design 3, the refreshed user interface is purposefully designed to streamline core collaboration journeys across our products. | Learn more


New updates for Google Meet on Poly Android-based appliances
We are rolling out updates to Google Meet to support our upcoming launch of Google Meet on Poly Android-based appliances. Within the Google Admin console, admins can enroll Poly devices and view reporting for these new appliances. The Google Meet hardware experience will become available in the upcoming Poly OS 4.0 update as part of the Poly Studio X series family. | Learn more


Managed Android devices must upgrade to Android Device Policy during March 2023
In 2019, we announced that a new Android management client, Android Device Policy, would replace the legacy Google Apps Device Policy client. We’re now in the final stages of this upgrade. All devices with the Google Apps Device Policy will lose access during March 2023 if they have not already upgraded. | Learn more


Completed rollouts

The features below completed their rollouts to Rapid Release domains, Scheduled Release domains, or both. Please refer to the original blog posts for additional details.


Withings reduces 50% of its data sync code by streamlining health and fitness API integrations with Health Connect

Posted by the Android team

French consumer electronics company Withings hosts one of the largest ecosystems of digital health and wellness products in the world. The company’s products include smart watches, smart scales, blood pressure monitors, and its own health-tracking application. Formerly known as Health Mate, the Withings application gives Withings users an easy way to track all of their health information—like activity, weight, ECG records, and sleep—obtained from Withings devices.

While Withings works to create a central hub for its users to access their health-related data, the number of devices and applications for monitoring health has grown substantially. And as health and fitness data spread across multiple platforms, it can be difficult for users to easily track and analyze this information.

To extend access to additional metrics and give Withings users a chance to use the application with their non-Withings apps and devices, Withings integrated Health Connect, Android’s latest API offering that gives users a simpler way to consolidate and share their health and fitness data across applications.

Health data is more powerful together

Before integrating Health Connect, Withings users had to manually choose which health and fitness apps could sync data to and from the Withings app. Now, thanks to Health Connect, its users can grant permissions to new health and fitness applications and automatically sync their data to the Withings app, letting them find their data in one easy-to-manage place.

“We integrated Health Connect in Withings app to grow our health sphere and offer a more complete experience to our users by supporting a wider range of data,” said Sophie Zecri, a mobile software engineer at Withings. “Health Connect helped us create a richer health-tracking interface and a more efficient overview for users.”

By uniting health data using the Health Connect API, the Withings application offers its users a more holistic view of their health and makes it easier to develop a deeper understanding of key health insights with the data they gather.

For instance, Withings’ users can now combine their other workout- or calorie-tracking applications with the Withings app. By doing this, users can more easily track how changes in one area of their health may be affecting another. Additionally, the Withings app can provide greater guidance and more specialized programs to meet each user's unique needs, such as specific dietary recommendations and recipes or more specialized exercise programs.

Withings also wanted the data available through the Withings application to be accessible in its users' other health and fitness apps. Integrating with Health Connect made this possible. “We wanted to extend access to additional metrics, giving our users a chance to use Withings devices with their other applications,” said Sophie.

Ensuring that users felt in control of their data was also a top priority for the Withings team. They saw Health Connect as a powerful tool that’s equally secure for both Withings and its users. With Health Connect, users can easily manage permissions in one place, with granular controls to see which apps are accessing data at any given time. And for Withings, setting up permission checks was as easy as dropping in a simple piece of code provided by Health Connect.

Simplify connectivity between apps with Health Connect

The amount of work required to connect with other third-party health and fitness applications was Withings’ biggest roadblock to giving its users access to additional syncs. All the APIs for every other app, with all their unique code, made integrations complex and expensive for Withings to maintain.

“Connecting with other apps’ APIs was onerous. Any changes had to be repeated for every API, which meant expanding the codebase and increasing the risk of bugs that could impact Withings app’s quality,” said Sophie.

Health Connect lets Withings developers maintain less code while preserving stability and minimizing potential bugs. This translates to a reduced codebase and increased productivity for other projects. By integrating Health Connect with the Withings app, Withings reduced the amount of code related to data sync with third-party applications by 50%.

Headshot of Sophie Zecri, Mobile Software Engineer at Withings, with quote: 'Integrating Health Connect was really rewarding for us. We're thrilled we can enrich the user experience by generating true synergy, letting users dive deeper into the details of their health aspects.'

Preparing for a future with Health Connect

The Withings team attributes much of its success to the available Android resources for developers looking to integrate Health Connect with their company’s app. Withings developers used the Health Connect UX developer guide to aid the integration, and they used the Health Connect toolbox for testing and to understand how the Withings app behaves with other applications that have integrated Health Connect.

The Withings team is excited to support new data types as its product range grows and new biomarker measurements become available. Currently, the company plans to expand its use of Health Connect by adding more data types related to women’s health.

“I would recommend Health Connect to other engineers looking to unite data for its users,” said Sophie. “Health Connect is a powerful, interesting, and easy tool to use.”

Join the many other apps using Health Connect today

Streamline integrations with other health and fitness apps while providing your users with deeper health insights using Health Connect.

Get started by viewing Android’s Introduction to Health Connect. Then, head over to the Health Connect Codelab and learn how you can integrate the Health Connect API today.

PaLM-E: An embodied multimodal language model

Recent years have seen tremendous advances across machine learning domains, from models that can explain jokes or answer visual questions in a variety of languages to those that can produce images based on text descriptions. Such innovations have been possible due to the increase in availability of large scale datasets along with novel advances that enable the training of models on these data. While scaling of robotics models has seen some success, it is outpaced by other domains due to a lack of datasets available on a scale comparable to large text corpora or image datasets.

Today we introduce PaLM-E, a new generalist robotics model that overcomes these issues by transferring knowledge from varied visual and language domains to a robotics system. We began with PaLM, a powerful large language model, and “embodied” it (the “E” in PaLM-E), by complementing it with sensor data from the robotic agent. This is the key difference from prior efforts to bring large language models to robotics — rather than relying on only textual input, with PaLM-E we train the language model to directly ingest raw streams of robot sensor data. The resulting model not only enables highly effective robot learning, but is also a state-of-the-art general-purpose visual-language model, while maintaining excellent language-only task capabilities.




An embodied language model, and also a visual-language generalist

On the one hand, PaLM-E was primarily developed to be a model for robotics, and it solves a variety of tasks on multiple types of robots and for multiple modalities (images, robot states, and neural scene representations). At the same time, PaLM-E is a generally-capable vision-and-language model. It can perform visual tasks, such as describing images, detecting objects, or classifying scenes, and is also proficient at language tasks, like quoting poetry, solving math equations or generating code.

PaLM-E combines our most recent large language model, PaLM, together with one of our most advanced vision models, ViT-22B. The largest instantiation of this approach, built on PaLM-540B, is called PaLM-E-562B and sets a new state of the art on the visual-language OK-VQA benchmark, without task-specific fine-tuning, and while retaining essentially the same general language performance as PaLM-540B.


How does PaLM-E work?

Technically, PaLM-E works by injecting observations into a pre-trained language model. This is realized by transforming sensor data, e.g., images, into a representation through a procedure that is comparable to how words of natural language are processed by a language model.

Language models rely on a mechanism to represent text mathematically in a way that neural networks can process. This is achieved by first splitting the text into so-called tokens that encode (sub)words, each of which is associated with a high-dimensional vector of numbers, the token embedding. The language model is then able to apply mathematical operations (e.g., matrix multiplication) on the resulting sequence of vectors to predict the next, most likely word token. By feeding the newly predicted word back to the input, the language model can iteratively generate a longer and longer text.
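
As a purely illustrative toy (not PaLM’s actual implementation), the sketch below shows that mechanism end to end: tokens are looked up in an embedding table, a stand-in for the network turns the sequence of vectors into one score per vocabulary token, and the predicted token is appended and fed back in. The vocabulary, dimensions, and random weights are all placeholders.

# Toy illustration of token embeddings and autoregressive prediction.
# Everything here (vocabulary, dimensions, weights) is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<pad>", "the", "robot", "picks", "up", "the_block", "."]
d_model = 16

embedding = rng.normal(size=(len(vocab), d_model))    # token embedding table
output_proj = rng.normal(size=(d_model, len(vocab)))  # hidden state -> vocabulary scores

def next_token(token_ids):
    vectors = embedding[token_ids]   # (sequence_length, d_model) token embeddings
    hidden = vectors.mean(axis=0)    # stand-in for the transformer's computation
    logits = hidden @ output_proj    # one score per vocabulary token
    return int(np.argmax(logits))

sequence = [vocab.index("the"), vocab.index("robot")]
for _ in range(3):                   # feed each prediction back in (autoregression)
    sequence.append(next_token(sequence))
print([vocab[i] for i in sequence])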

The inputs to PaLM-E are text and other modalities — images, robot states, scene embeddings, etc. — in an arbitrary order, which we call "multimodal sentences". For example, an input might look like, "What happened between <img_1> and <img_2>?", where <img_1> and <img_2> are two images. The output is text generated auto-regressively by PaLM-E, which could be an answer to a question, or a sequence of decisions in text form.

PaLM-E model architecture, showing how PaLM-E ingests different modalities (states and/or images) and addresses tasks through multimodal language modeling.

The idea of PaLM-E is to train encoders that convert a variety of inputs into the same space as the natural word token embeddings. These continuous inputs are mapped into something that resembles "words" (although they do not necessarily form discrete sets). Since both the word and image embeddings now have the same dimensionality, they can be fed into the language model.
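
Continuing the toy sketch from above (again with placeholder dimensions and random weights rather than the real ViT or PaLM parameters), the core idea reduces to a learned projection that maps continuous sensor features into the token embedding dimension so they can be interleaved with word embeddings in a single sequence:

# Toy sketch of the PaLM-E idea: project continuous inputs (e.g., ViT image
# features) into the token embedding space and interleave them with words.
import numpy as np

rng = np.random.default_rng(0)
d_vision, d_model = 32, 16                               # illustrative sizes only

image_features = rng.normal(size=(d_vision,))            # stand-in for ViT output
vision_to_token = rng.normal(size=(d_vision, d_model))   # learned encoder/projection
image_embedding = image_features @ vision_to_token       # now in token-embedding space

word_embeddings = rng.normal(size=(4, d_model))          # e.g., "What happened between ... and ..."

# The projected image embedding and the word embeddings share the same
# dimensionality, so they form one "multimodal sentence" the language model
# can process.
multimodal_sentence = np.vstack([word_embeddings[:2], image_embedding, word_embeddings[2:]])
print(multimodal_sentence.shape)                         # (5, 16)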

We initialize PaLM-E for training with pre-trained models for both the language (PaLM) and vision components (Vision Transformer, a.k.a. ViT). All parameters of the model can be updated during training.


Transferring knowledge from large-scale training to robots

PaLM-E offers a new paradigm for training a generalist model, which is achieved by framing robot tasks and vision-language tasks together through a common representation: taking images and text as input, and outputting text. A key result is that PaLM-E attains significant positive knowledge transfer from both the vision and language domains, improving the effectiveness of robot learning.

Positive transfer of knowledge from general vision-language tasks results in more effective robot learning, shown for three different robot embodiments and domains.

Results show that PaLM-E can address a large set of robotics, vision and language tasks simultaneously without performance degradation compared to training individual models on individual tasks. Further, the visual-language data actually significantly improves the performance of the robot tasks. This transfer enables PaLM-E to learn robotics tasks efficiently in terms of the number of examples it requires to solve a task.


Results

We evaluate PaLM-E on three robotic environments, two of which involve real robots, as well as general vision-language tasks such as visual question answering (VQA), image captioning, and general language tasks. When PaLM-E is tasked with making decisions on a robot, we pair it with a low-level language-to-action policy to translate text into low-level robot actions.

In the first example below, a person asks a mobile robot to bring a bag of chips to them. To successfully complete the task, PaLM-E produces a plan to find the drawer and open it and then responds to changes in the world by updating its plan as it executes the task. In the second example, the robot is asked to grab a green block. Even though the block has not been seen by that robot, PaLM-E still generates a step-by-step plan that generalizes beyond the training data of that robot.

  
PaLM-E controls a mobile robot operating in a kitchen environment. Left: The task is to get a chip bag. PaLM-E shows robustness against adversarial disturbances, such as putting the chip bag back into the drawer. Right: The final steps of executing a plan to retrieve a previously unseen block (green star). This capability is facilitated by transfer learning from the vision and language models.

In the second environment below, the same PaLM-E model solves very long-horizon, precise tasks, such as “sort the blocks by colors into corners,” on a different type of robot. It directly looks at the images and produces a sequence of shorter textually-represented actions — e.g., “Push the blue cube to the bottom right corner,” “Push the blue triangle there too.” — long-horizon tasks that were out of scope for autonomous completion, even in our own most recent models. We also demonstrate the ability to generalize to new tasks not seen during training time (zero-shot generalization), such as pushing red blocks to the coffee cup.

  
PaLM-E controlling a tabletop robot to successfully complete long-horizon tasks.

The third robot environment is inspired by the field of task and motion planning (TAMP), which studies combinatorially challenging planning tasks (rearranging objects) that confront the robot with a very high number of possible action sequences. We show that with a modest amount of training data from an expert TAMP planner, PaLM-E is not only able to solve these tasks, but also leverages visual and language knowledge transfer to do so more effectively.

  
PaLM-E produces plans for a task and motion planning environment.

As a visual-language generalist, PaLM-E is a competitive model, even compared with the best vision-language-only models, including Flamingo and PaLI. In particular, PaLM-E-562B achieves the highest number ever reported on the challenging OK-VQA dataset, which requires not only visual understanding but also external knowledge of the world. Further, this result is reached with a generalist model, without fine-tuning specifically on only that task.

PaLM-E exhibits capabilities like visual chain-of-thought reasoning, in which the model breaks down its answering process into smaller steps, an ability that has so far only been demonstrated in the language-only domain. The model also demonstrates the ability to perform inference on multiple images despite being trained on only single-image prompts.

Conclusion

PaLM-E pushes the boundaries of how generally-capable models can be trained to simultaneously address vision, language and robotics while also being capable of transferring knowledge from vision and language to the robotics domain. There are additional topics investigated in further detail in the paper, such as how to leverage neural scene representations with PaLM-E and also the extent to which PaLM-E, with greater model scale, experiences less catastrophic forgetting of its language capabilities.

PaLM-E not only provides a path towards building more capable robots that benefit from other data sources, but might also be a key enabler to other broader applications using multimodal learning, including the ability to unify tasks that have so far seemed separate.


Acknowledgements

This work was done in collaboration across several teams at Google, including the Robotics at Google team and the Brain team, and with TU Berlin. Co-authors: Igor Mordatch, Andy Zeng, Aakanksha Chowdhery, Klaus Greff, Mehdi S. M. Sajjadi, Daniel Duckworth, Corey Lynch, Ayzaan Wahid, Jonathan Tompson, Fei Xia, Brian Ichter, Karol Hausman, Tianhe Yu, Quan Vuong, Yevgen Chebotar, Wenlong Huang, Pierre Sermanet, Sergey Levine, Vincent Vanhoucke, and Marc Toussaint. Danny is a PhD student advised by Marc Toussaint at TU Berlin. We would also like to thank several other colleagues for their advice and help, including Xi Chen, Etienne Pot, Sebastian Goodman, Maria Attarian, Ted Xiao, Keerthana Gopalakrishnan, Kehang Han, Henryk Michalewski, Neil Houlsby, Basil Mustafa, Justin Gilmer, Yonghui Wu, Erica Moreira, Victor Gomes, Tom Duerig, Mario Lucic, Henning Meyer, and Kendra Byrne.

Source: Google AI Blog


Removing support for PHP 7 in the Google Ads API client library for PHP

In July 2023, the Google Ads API client library for PHP will start requiring PHP version 8.0 or higher. The version of the client library that adds support for the Google Ads API v14 will be the last version that supports PHP 7. We’ll still fix security issues for this client library version until the Google Ads API v14 is sunset, but no new features will be added.

All PHP 7 versions reached their end of life in 2022. The PHP development team no longer provides security fixes for these versions, so we highly recommend migrating to newer versions as soon as possible.

Here are useful resources to help with the PHP migration:

If you have any questions regarding this change, feel free to comment directly on the GitHub issue.

Long Term Support (LTS) channel for ChromeOS – Major update from 102 -> 108

The Long Term Support Candidate has been promoted to ChromeOS LTS 108 and is rolling out to most ChromeOS devices. The current version is 108.0.5359.221 (Platform Version: 15183.8240).

If you are currently on the ChromeOS Long Term Support (LTS) channel (and not pinned to 102), your devices will automatically update from ChromeOS LTS 102 to ChromeOS LTS 108.

We are happy to announce that, starting with this version, ChromeOS LTS supports ChromeOS Flex.

Release notes for LTS-108 can be found here.


Giuliana Pritchard 

Google ChromeOS

Stable Channel Update for ChromeOS / ChromeOS Flex

The Stable channel is being updated to OS version 15329.44.0 (browser version 111.0.5563.71) for most ChromeOS devices.

If you find new issues, please let us know in one of the following ways:

  1. File a bug
  2. Visit our ChromeOS communities
    1. General: Chromebook Help Community
    2. Beta Specific: ChromeOS Beta Help Community
  3. Report an issue or send feedback on Chrome

Interested in switching channels? Find out how.

Daniel Gagnon,
Google ChromeOS

Security Fixes and Rewards

Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.

[$3000] [1348791] Medium CVE-2023-1227 Security: Use-after-free in LaCros. Reported by @ginggilBesel

We would also like to thank all security researchers that worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.

Rescheduled changes to targeting expansion in Display & Video 360 API and Structured Data Files

On March 25, 2023, optimized targeting will gradually begin replacing targeting expansion for all display, video, and audio line items in Display & Video 360. By mid-April, optimized targeting will be live for all partners. These changes will be reflected in Display & Video 360 API and Structured Data Files (SDF), and might impact your existing implementation and the success of your requests.

Similar changes were previously announced and subsequently postponed late last year, but these rescheduled changes have some important differences. Please read this blog post in its entirety if you use the targetingExpansion field in the Display & Video 360 API or the “Audience Targeting - Similar Audiences” column in SDF to configure your Display & Video 360 line items.

Changes in the Display & Video 360 API

Optimized targeting will be configured using the existing targetingExpansion field and TargetingExpansionConfig object in Display & Video 360 API LineItem resources. Once optimized targeting replaces targeting expansion for your partner, these fields will be used to manage optimized targeting in the following ways (see the sketch after this list):

  • The targetingExpansionLevel field will only support two possible values:
    • NO_EXPANSION: optimized targeting is off.
    • LEAST_EXPANSION: optimized targeting is on.
  • Optimized targeting will not automatically be turned on for eligible line items created or updated by the Display & Video 360 API. NO_EXPANSION will be the default value for the targetingExpansionLevel field and will be automatically assigned if you do not set the field.
  • If you set targetingExpansionLevel to one of the following values, it will automatically be reset to LEAST_EXPANSION:
    • SOME_EXPANSION
    • BALANCED_EXPANSION
    • MORE_EXPANSION
    • MOST_EXPANSION
  • excludeFirstPartyAudience will be deprecated. If set to true, it will automatically be reset to false.
  • If you turn on optimized targeting for an ineligible line item, the request will not return an error and the change will persist. However, you must update the line item to be eligible before it will use optimized targeting when serving.
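
As referenced above, here is a hedged sketch of explicitly setting the optimized targeting state on a line item with the Display & Video 360 API v2 via google-api-python-client. The advertiser ID, line item ID, and key file path are placeholders; the field names follow the targetingExpansion field described in this post, but verify them against the current API reference.

# Hedged sketch: explicitly turn optimized targeting off (NO_EXPANSION) or on
# (LEAST_EXPANSION) on a line item. IDs and key path are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPE = "https://www.googleapis.com/auth/display-video"
ADVERTISER_ID = "1234567"  # placeholder
LINE_ITEM_ID = "7654321"   # placeholder

credentials = service_account.Credentials.from_service_account_file(
    "service-account-key.json", scopes=[SCOPE]
)
dv360 = build("displayvideo", "v2", credentials=credentials)

# Only NO_EXPANSION (off) and LEAST_EXPANSION (on) remain meaningful; other
# levels are reset to LEAST_EXPANSION once optimized targeting replaces
# targeting expansion.
body = {"targetingExpansion": {"targetingExpansionLevel": "NO_EXPANSION"}}

updated = (
    dv360.advertisers()
    .lineItems()
    .patch(
        advertiserId=ADVERTISER_ID,
        lineItemId=LINE_ITEM_ID,
        updateMask="targetingExpansion.targetingExpansionLevel",
        body=body,
    )
    .execute()
)
print(updated["targetingExpansion"])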

We will also update the configurations of existing line items as follows:

To aid in this migration, we are adding a new LineItemWarningMessage value to Display & Video 360 API v1 and v2. The warning value DEPRECATED_FIRST_PARTY_AUDIENCE_EXCLUSION will be added to the warningMessages field of line items that have excludeFirstPartyAudience set to true.

Changes in the Structured Data Files

Targeting expansion is currently set using the “Audience Targeting - Similar Audiences” column in Line Item and Insertion Order SDF formats. These columns will be deprecated when optimized targeting replaces targeting expansion. When downloading SDFs, this column will always be set to FALSE and when uploading SDFs, the value of this column will be ignored.

Eligible line items created using SDF upload will have optimized targeting turned off by default. You will not be able to manage optimized targeting configurations using existing SDF versions and must instead use the Display & Video 360 UI or API.

The upcoming SDF v6 will support optimized targeting configurations. The SDF v6 launch will be announced on the Google Ads Developer Blog and in the SDF release notes.

Preparing for these changes

To prepare for these changes, we recommend the following:

  • Turn on automated bidding for line items currently using fixed bidding with targeting expansion before March 25, 2023 so that they are eligible for optimized targeting.
  • Update your Display & Video 360 API integration to directly exclude any first-party audiences using audience targeting before March 25, 2023 to account for the deprecation of the excludeFirstPartyAudience field (see the sketch after this list).
  • Update your Structured Data Files integration to no longer use the “Audience Targeting - Similar Audiences” column in Line Item and Insertion Order SDFs before March 25, 2023 to account for the column deprecation.
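
For the second recommendation above, the sketch below shows one possible way to exclude a first-party audience directly through audience targeting with the Display & Video 360 API, rather than relying on the deprecated excludeFirstPartyAudience field. The IDs are placeholders, and the audience-group field names (audienceGroupDetails, excludedFirstAndThirdPartyAudienceGroup, settings, firstAndThirdPartyAudienceId) are assumptions to verify against the API reference; note that a line item carries at most one TARGETING_TYPE_AUDIENCE_GROUP assigned targeting option, so existing audience targeting may need to be read and merged first.

# Hedged sketch: exclude a first-party audience via audience targeting on a
# line item. IDs are placeholders; audience-group field names are assumptions
# to confirm against the current Display & Video 360 API reference.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPE = "https://www.googleapis.com/auth/display-video"
ADVERTISER_ID = "1234567"            # placeholder
LINE_ITEM_ID = "7654321"             # placeholder
FIRST_PARTY_AUDIENCE_ID = "111222"   # placeholder audience to exclude

credentials = service_account.Credentials.from_service_account_file(
    "service-account-key.json", scopes=[SCOPE]
)
dv360 = build("displayvideo", "v2", credentials=credentials)

# Audience targeting is expressed as a single TARGETING_TYPE_AUDIENCE_GROUP
# assigned targeting option; the exclusion lives in its details object.
body = {
    "audienceGroupDetails": {
        "excludedFirstAndThirdPartyAudienceGroup": {
            "settings": [{"firstAndThirdPartyAudienceId": FIRST_PARTY_AUDIENCE_ID}]
        }
    }
}

created = (
    dv360.advertisers()
    .lineItems()
    .targetingTypes()
    .assignedTargetingOptions()
    .create(
        advertiserId=ADVERTISER_ID,
        lineItemId=LINE_ITEM_ID,
        targetingType="TARGETING_TYPE_AUDIENCE_GROUP",
        body=body,
    )
    .execute()
)
print(created["name"])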

If you have questions regarding these changes or need help with these new features, please contact us using our support contact form.