Lakewood now online! GFiber starts sign-ups today.


Colorado, here we come! Starting today, residents in the Eiber neighborhood of Lakewood can sign up for high-speed internet from GFiber to meet their families’ needs, whatever their online lives require.


We’re marking the occasion with an ice cream social at The Ice Cream Farm. This afternoon many Lakewood residents braved the rain to come by and get the scoop on GFiber while enjoying some delicious frozen treats on us. 

Lakewood Mayor Wendi Strom stops by our event at The Ice Cream Farm.

Lakewood customers will be able to choose any GFiber plan — 1 Gig for $70/month, 2 Gig for $100/month, 5 Gig for $125/month or 8 Gig for $150/month — all with symmetrical uploads and downloads and equipment and installation included at no additional cost, no annual contracts and no data caps. Local businesses can sign up for GFiber for Business, offering Business 2 Gig for $250/month or Business 1 Gig for $100/month.


This is just the next step for GFiber in the Denver metro area; GFiber Webpass has been available in Denver since 2017. We’re continuing to build out our network across Lakewood, and as new segments are completed, we’ll open service in those neighborhoods. We’re also set to start construction in Westminster soon, and we’re actively working on design and permitting for Golden, Wheat Ridge, and additional parts of Adams County. For the latest on our construction progress and service availability in Colorado, sign up here.

Posted by Andy Simpson, General Manager







Chrome for Android Update

Hi, everyone! We've just released Chrome 128 (128.0.6613.127) for Android. It'll become available on Google Play over the next few days.

This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Android releases contain the same security fixes as their corresponding Desktop releases (Windows & Mac: 128.0.6613.119/120 and Linux: 128.0.6613.119), unless otherwise noted.


Harry Souders
Google Chrome

Beta Channel Update for ChromeOS / ChromeOS Flex

The Beta channel is being updated to OS version: 16002.17.0, Browser version: 129.0.6668.28 for most ChromeOS devices.

If you find new issues, please let us know in one of the following ways:

  1. File a bug
  2. Visit our ChromeOS communities
    1. General: Chromebook Help Community
    2. Beta Specific: ChromeOS Beta Help Community
  3. Report an issue or send feedback on Chrome
  4. Interested in switching channels? Find out how.

Matt Nelson,

Google ChromeOS

Targeting and pacing changes in the Display & Video 360 API and Structured Data Files

In the coming months, the following updates to the Display & Video 360 product might impact your integration with the Display & Video 360 API and Structured Data Files:

  • Changes to content targeting for YouTube & partners line items
  • Sunset of Oracle first- and third-party audiences
  • Ineligibility of optimized targeting for certain bid strategies
  • Sunset of “Flight ASAP” pacing for insertion orders

For details on how to prepare for each of these changes, read the rest of this blog post and check the Display & Video 360 API Announced Deprecations page.

Changes to content targeting for YouTube & partners line items

On September 30, 2024, the majority of Digital Content Label and Other Content Types exclusion targeting will no longer be available for YouTube & partners line items.

In the Display & Video 360 API, all TARGETING_TYPE_DIGITAL_CONTENT_LABEL_EXCLUSION (excluding CONTENT_RATING_TIER_FAMILIES values) and TARGETING_TYPE_SENSITIVE_CATEGORY_EXCLUSION AssignedTargetingOptions will be removed from YouTube & partners line items. This will impact responses from assigned targeting LIST requests, and attempts to retrieve these resources using advertisers.lineItems.targetingTypes.assignedTargetingOptions.get will return a 404 error.

In Structured Data Files, Line Item files will no longer use the following values in the “TrueView Category Exclusions Targeting” column:

  • “Embedded Videos”
  • “Live Streaming”
  • “All Audiences”
  • “Younger Teens”
  • “Teens”
  • “Adults”
  • “Not Yet Rated”

Generated files will no longer populate these values in the column. Line Item file entries for YouTube & partners line items using these configurations will fail on upload.

To avoid any interruption of service, remove any impacted targeting from YouTube & partners line items using the UI or Structured Data Files upload.

Sunset of Oracle first- and third-party audiences

On September 30, 2024, first- and third-party audiences sourced from Oracle will sunset. Once sunset, these audiences will be removed from any resource targeting and combined audiences. This update will automatically pause any line items that target only sunset audiences, or negatively target any sunset audiences.

You can identify sunsetting third-party audiences in the UI as third-party audiences from providers “Bluekai”, “Datalogix”, and “AddThis”. If you have an external account link with Oracle to import audiences from their platform, check with your relevant team to identify sunsetting first-party audiences. These audiences can’t be easily identified using the Display & Video 360 API.

In the Display & Video 360 API, TARGETING_TYPE_AUDIENCE_GROUP AssignedTargetingOptions will be updated to remove sunset audiences. Requests to add sunset audiences to resource targeting will return a 400 error.

In Structured Data Files, IDs of sunset audiences will no longer be included in “Audience Targeting - Include” or “Audience Targeting - Exclude” columns in Insertion Order, Line Item, and YouTube Ad Group files, as well as the “Bid Multiplier” column in Line Item files. File entries using sunset audience IDs in these columns will fail on upload.

To avoid any interruption of service, review the audiences used in your resource targeting, identify any Oracle audiences, and remove them. If you cache audience IDs, make sure to remove these audiences from your cache.

Ineligibility of Optimized Targeting for certain bid strategies

On September 30, 2024, line items using the following bid strategies will no longer be able to use optimized targeting:

  • Maximum views with in-view time over 10 seconds
  • Maximum completed in-view and audible views
  • Maximum viewable impressions
  • Target viewable CPM

At that time, line items that use one of these bid strategies combined with optimized targeting will be updated to turn off optimized targeting.

In the Display & Video 360 API, LineItem resources with targetingExpansion.enableOptimizedTargeting set to True and bidStrategy.maximizeSpendAutoBid.performanceGoalType set to BIDDING_STRATEGY_PERFORMANCE_GOAL_TYPE_CIVA, BIDDING_STRATEGY_PERFORMANCE_GOAL_TYPE_IVO_TEN, or BIDDING_STRATEGY_PERFORMANCE_GOAL_TYPE_AV_VIEWED or bidStrategy.performanceGoalAutoBid.performanceGoalType set to BIDDING_STRATEGY_PERFORMANCE_GOAL_TYPE_VIEWABLE_CPM will be updated to set targetingExpansion.enableOptimizedTargeting to False. Requests creating or updating LineItem resources with any of these sunset configurations will return a 400 error.

In Structured Data Files, Line Item file entries with either “Optimized vCPM” in the “Bid Strategy Type” column or a combination of “Maximum” in the “Bid Strategy Type” column and “CIVA”, “IVO_TEN”, or “AV_VIEWED” in the “Bid Strategy Unit” column will be updated, if needed, to set the “Optimized Targeting” column to False. Line Item file entries using the sunset configurations will fail on upload.

To avoid any interruption of service, update and verify that your line items using these bid strategies don’t have optimized targeting turned on.

Sunset of “Flight ASAP” pacing for insertion orders

On November 5, 2024, “Flight ASAP” pacing will sunset for insertion orders. All existing Insertion Orders with “Flight ASAP” pacing will be updated to “Flight Ahead” pacing.

In the Display & Video 360 API, InsertionOrder resources with a pacing.pacingPeriod of PACING_PERIOD_FLIGHT and a pacing.pacingType of PACING_TYPE_ASAP will be updated to use a pacing.pacingType of PACING_TYPE_AHEAD. Requests creating or updating InsertionOrder resources with this configuration will return a 400 error.

In Structured Data Files, Insertion Order file entries with “Flight” and “ASAP” values in “Pacing” and “Pacing Rate” columns, respectively, will be updated to an “Ahead” value in the “Pacing Rate” column. Insertion Order file entries using the sunset configuration will fail on upload.

To avoid any interruption of service, update the pacing of any existing insertion orders currently using the “Flight ASAP” configuration.

If you run into issues or have questions about these changes, please contact us using our new Display & Video 360 API Technical support contact form.

Gemini (gemini.google.com) now shows related content links in its responses

What’s changing 

Starting today, you can access additional information on topics directly in Gemini’s (gemini.google.com) responses to your prompts. Specifically, you’ll see links to related content in responses to fact-seeking prompts — you can click the arrow chips to dive deeper into the topic. If you have a Gemini for Workspace license and Google Workspace extensions in Gemini are enabled, Gemini will also now include inline links to relevant emails referenced in responses where the Gmail extension is used. 


Including this information offers another easy way to dig deeper into Gemini’s responses. You can also use Gemini’s double-check feature to verify responses by using Google Search to highlight which statements are corroborated or contradicted on the web.


Additional details

  • At this time, this feature is limited to English prompts only.
  • This feature is available in most countries where Gemini (gemini.google.com) is available.

Getting started

Rollout pace


Availability

  • Available to all Google Workspace users with access to gemini.google.com.
  • Users with a Gemini Business, Enterprise, Education, or Education Premium add-on license will see Gmail citations if Google Workspace extensions in Gemini are enabled by their admin.

Deploying Rust in Existing Firmware Codebases

Android's use of safe-by-design principles drives our adoption of memory-safe languages like Rust, making exploitation of the OS increasingly difficult with every release. To provide a secure foundation, we’re extending hardening and the use of memory-safe languages to low-level firmware (including in Trusty apps).


In this blog post, we'll show you how to gradually introduce Rust into your existing firmware, prioritizing new code and the most security-critical code. You'll see how easy it is to boost security with drop-in Rust replacements, and we'll even demonstrate how the Rust toolchain can handle specialized bare-metal targets.


Drop-in Rust replacements for C code are not a novel idea and have been used in other cases, such as librsvg’s adoption of Rust which involved replacing C functions with Rust functions in-place. We seek to demonstrate that this approach is viable for firmware, providing a path to memory-safety in an efficient and effective manner.

Memory Safety for Firmware

Firmware serves as the interface between hardware and higher-level software. Due to the lack of software security mechanisms that are standard in higher-level software, vulnerabilities in firmware code can be dangerously exploited by malicious actors. Modern phones contain many coprocessors responsible for handling various operations, and each of these run their own firmware. Often, firmware consists of large legacy code bases written in memory-unsafe languages such as C or C++. Memory unsafety is the leading cause of vulnerabilities in Android, Chrome, and many other code bases.


Rust provides a memory-safe alternative to C and C++ with comparable performance and code size. Additionally it supports interoperability with C with no overhead. The Android team has discussed Rust for bare-metal firmware previously, and has developed training specifically for this domain.

Incremental Rust Adoption

Our incremental approach, focusing on new code and the highest-risk existing code (for example, code which processes external untrusted input), can provide maximum security benefits with the least amount of effort. Simply writing any new code in Rust reduces the number of new vulnerabilities and, over time, can lead to a reduction in the number of outstanding vulnerabilities.


You can replace existing C functionality by writing a thin Rust shim that translates between an existing Rust API and the C API the codebase expects. The C API is replicated and exported by the shim for the existing codebase to link against. The shim serves as a wrapper around the Rust library API, bridging the existing C API and the Rust API. This is a common approach when rewriting or replacing existing libraries with a Rust alternative.

Challenges and Considerations

There are several challenges you need to consider before introducing Rust to your firmware codebase. In the following section we address the general state of no_std Rust (that is, bare-metal Rust code), how to find the right off-the-shelf crate (a Rust library), porting an std crate to no_std, using Bindgen to produce FFI bindings, how to approach allocators and panics, and how to set up your toolchain.

The Rust Standard Library and Bare-Metal Environments

Rust's standard library consists of three crates: core, alloc, and std. The core crate is always available. The alloc crate requires an allocator for its functionality. The std crate assumes a full-blown operating system and is commonly not supported in bare-metal environments. A third-party crate indicates that it doesn’t rely on std through the crate-level #![no_std] attribute; such a crate is said to be no_std compatible. The rest of this post focuses on these crates.

Choosing a Component to Replace

When choosing a component to replace, focus on self-contained components with robust testing. Ideally, the component’s functionality can be provided by a readily available open-source implementation that supports bare-metal environments.


Parsers which handle standard and commonly used data formats or protocols (such as XML or DNS) are good initial candidates. This ensures the initial effort focuses on the challenges of integrating Rust with the existing code base and build system rather than on the particulars of a complex component, and it simplifies testing. This approach eases introducing more Rust later on.

Choosing a Pre-Existing Crate (Rust Library)

Picking the right open-source crate (Rust library) to replace the chosen component is crucial. Things to consider are:

  • Is the crate well maintained? For example, are open issues being addressed, and does it use recent versions of its dependencies?

  • How widely used is the crate? This can serve as a quality signal, and it is also important to consider for crates you may use later on which may depend on it.

  • Does the crate have acceptable documentation?

  • Does it have acceptable test coverage?


Additionally, the crate should ideally be no_std compatible, meaning the standard library is either unused or can be disabled. While a wide range of no_std compatible crates exist, others do not yet support this mode of operation – in those cases, see the next section on converting a std library to no_std.


By convention, crates which optionally support no_std will provide an std feature to indicate whether the standard library should be used. Similarly, the alloc feature usually indicates using an allocator is optional.
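For example, when depending on such a crate from a no_std build, you would typically disable its default features and opt back in to alloc only. A minimal Cargo.toml sketch (the crate name here is a placeholder):

[dependencies]
# Hypothetical no_std-compatible crate: drop its default "std" feature
# and keep the optional "alloc" feature for heap-backed types.
some_parser = { version = "1", default-features = false, features = ["alloc"] }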


Note: Even when a library declares #![no_std] in its source, there is no guarantee that its dependencies don’t depend on std. We recommend looking through the dependency tree to ensure that all dependencies support no_std; ultimately, the only way to know for sure is to try compiling the crate for a bare-metal target.



For example, one approach is to run cargo check with a bare-metal toolchain provided through rustup:

$ rustup target add aarch64-unknown-none

$ cargo check --target aarch64-unknown-none --no-default-features


Porting a std Library to no_std

If a library does not support no_std, it might still be possible to port it to a bare-metal environment – especially file format parsers and other OS agnostic workloads. Higher-level functionality such as file handling, threading, and async code may present more of a challenge. In those cases, such functionality can be hidden behind feature flags to still provide the core functionality in a no_std build.

To port a std crate to no_std (core+alloc):

  • In the Cargo.toml file, add a std feature, then add this std feature to the default features (a Cargo.toml sketch is shown after the porting steps below)

  • Add the following lines to the top of the lib.rs:

#![no_std]

#[cfg(feature = "std")]
extern crate std;
extern crate alloc;

Then, iteratively fix all occurring compiler errors as follows:

  1. Move any use directives from std to either core or alloc.

  2. Add use directives for all types that would otherwise automatically be imported by the std prelude, such as alloc::vec::Vec and alloc::string::String.

  3. Hide anything that doesn't exist in core or alloc and cannot otherwise be supported in the no_std build (such as file system accesses) behind a #[cfg(feature = "std")] guard.

  4. Anything that needs to interact with the embedded environment may need to be explicitly handled, such as functions for I/O. These likely need to be behind a #[cfg(not(feature = "std"))] guard.

  5. Disable std for all dependencies (that is, change their definitions in Cargo.toml, if using Cargo).

This needs to be repeated for all dependencies within the crate dependency tree that do not support no_std yet.
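For the Cargo.toml side of these steps (declaring the std feature and disabling std in dependencies), a minimal sketch might look like this; the dependency name is a placeholder:

[features]
# "std" is on by default; no_std builds opt out with --no-default-features.
default = ["std"]
std = []

[dependencies]
# Disable std for dependencies that expose it as an optional feature.
some_dependency = { version = "1", default-features = false }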

Custom Target Architectures

The Rust compiler officially supports a number of targets; however, many bare-metal targets are missing from that list. Thankfully, the Rust compiler lowers to LLVM IR and uses an internal copy of LLVM to lower to machine code. Thus, it can support any target architecture that LLVM supports by defining a custom target.


Defining a custom target requires a toolchain built with the channel set to dev or nightly. Rust’s Embedonomicon has a wealth of information on this subject and should be referred to as the source of truth. 


To give a quick overview, a custom target JSON file can be constructed by finding a similar supported target and dumping the JSON representation:


$ rustc --print target-list
[...]
armv7a-none-eabi
[...]

$ rustc -Z unstable-options --print target-spec-json --target armv7a-none-eabi


This will print out a target JSON that looks something like:

{
  "abi": "eabi",
  "arch": "arm",
  "c-enum-min-bits": 8,
  "crt-objects-fallback": "false",
  "data-layout": "e-m:e-p:32:32-Fi8-i64:64-v128:64:128-a:0:32-n32-S64",
  [...]
}


This output can provide a starting point for defining your target. Of particular note, the data-layout field is defined in the LLVM documentation.


Once the target is defined, libcore and liballoc (and libstd, if applicable) must be built from source for the newly defined target. If using Cargo, building with -Z build-std accomplishes this, indicating that these libraries should be built from source for your target along with your crate module:

# set build-std to the list of libraries needed

cargo build -Z build-std=core,alloc --target my_target.json

Building Rust With LLVM Prebuilts

If the bare-metal architecture is not supported by the LLVM bundled internal to the Rust toolchain, a custom Rust toolchain can be produced with any LLVM prebuilts that support the target.


The instructions for building a Rust toolchain can be found in detail in the Rust Compiler Developer Guide. In the config.toml, llvm-config must be set to the path of the LLVM prebuilts.
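For example, the relevant config.toml entry might look like the following, where the target section names the build host triple and the path to the prebuilts is a placeholder:

[target.x86_64-unknown-linux-gnu]
llvm-config = "/path/to/llvm-prebuilts/bin/llvm-config"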


You can find the latest Rust toolchain supported by a particular version of LLVM by checking the release notes and looking for releases which bump the minimum supported LLVM version. For example, Rust 1.76 bumped the minimum LLVM to 16 and Rust 1.73 bumped it to 15. That means that with LLVM 15 prebuilts, the latest Rust toolchain that can be built is 1.75.

Creating a Drop-In Rust Shim

To create a drop-in replacement for the C/C++ function or API being replaced, the shim needs two things: it must provide the same API as the replaced library and it must know how to run in the firmware’s bare-metal environment.

Exposing the Same API

The first is achieved by defining a Rust FFI interface with the same function signatures.


We try to keep the amount of unsafe Rust as minimal as possible by putting the actual implementation in a safe function and exposing a thin wrapper around it.


For example, the FreeRTOS coreJSON example includes a JSON_Validate C function with the following signature:

JSONStatus_t JSON_Validate( const char * buf, size_t max );


We can write a shim in Rust between it and the memory safe serde_json crate to expose the C function signature. We try to keep the unsafe code to a minimum and call through to a safe function early:

#[no_mangle]
pub unsafe extern "C" fn JSON_Validate(buf: *const c_char, len: usize) -> JSONStatus_t {
    if buf.is_null() {
        JSONStatus::JSONNullParameter as _
    } else if len == 0 {
        JSONStatus::JSONBadParameter as _
    } else {
        json_validate(slice_from_raw_parts(buf as _, len).as_ref().unwrap()) as _
    }
}


// No more unsafe code in here.
fn json_validate(buf: &[u8]) -> JSONStatus {
    if serde_json::from_slice::<Value>(buf).is_ok() {
        JSONStatus::JSONSuccess
    } else {
        ILLEGAL_DOC
    }
}



Note: This is a very simple example. For a highly resource constrained target, you can avoid alloc and use serde_json_core, which has even lower overhead but requires pre-defining the JSON structure so it can be allocated on the stack.



For further details on how to create an FFI interface, the Rustonomicon covers this topic extensively.

Calling Back to C/C++ Code

In order for any Rust component to be functional within a C-based firmware, it will need to call back into the C code for things such as allocations or logging. Thankfully, there are a variety of tools available which automatically generate Rust FFI bindings to C. That way, C functions can easily be invoked from Rust.


The standard means of doing this is with the Bindgen tool. You can use Bindgen to parse all relevant C headers that define the functions Rust needs to call into. It's important to invoke Bindgen with the same CFLAGS as the code in question is built with, to ensure that the bindings are generated correctly.
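For example, a build.rs along the following lines can generate the bindings at build time. The header path and flags here are illustrative; use_core() keeps the generated bindings usable from no_std code.

// build.rs (sketch): generate Rust FFI bindings for the firmware's C headers.
// bindgen is a build-dependency, and build scripts run on the host, so std is fine here.
use std::{env, path::PathBuf};

fn main() {
    let bindings = bindgen::Builder::default()
        // Hypothetical header declaring the C functions Rust needs to call.
        .header("firmware/include/firmware_api.h")
        // Use the same flags the C code is compiled with so types match.
        .clang_args(["-Ifirmware/include", "-DCONFIG_EXAMPLE=1"])
        // Emit core-based types so the bindings work in no_std code.
        .use_core()
        .generate()
        .expect("failed to generate bindings");

    let out = PathBuf::from(env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out.join("firmware_bindings.rs"))
        .expect("failed to write bindings");
}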


Experimental support for producing bindings to static inline functions is also available.

Hooking Up The Firmware’s Bare-Metal Environment

Next we need to hook up Rust panic handlers, global allocators, and critical section handlers to the existing code base. This requires producing definitions for each of these which call into the existing firmware C functions.


The Rust panic handler must be defined to handle unexpected states or failed assertions. A custom panic handler can be defined via the panic_handler attribute. This is specific to the target and should, in most cases, either point to an abort function for the current task/process, or a panic function provided by the environment.
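A minimal sketch, assuming the firmware exposes an abort routine for the current task (firmware_abort is a hypothetical name):

use core::panic::PanicInfo;

extern "C" {
    // Hypothetical firmware routine that aborts the current task and never returns.
    fn firmware_abort() -> !;
}

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    // Optionally forward _info to the firmware's logging facility before aborting.
    unsafe { firmware_abort() }
}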


If an allocator is available in the firmware and the crate relies on the alloc crate, the Rust allocator can be hooked up by defining a global allocator implementing GlobalAlloc.
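A minimal sketch, assuming the firmware already exposes malloc/free-style routines; the firmware_malloc and firmware_free names are hypothetical, and this simple version assumes the firmware allocator returns memory sufficiently aligned for layout.align():

use core::alloc::{GlobalAlloc, Layout};
use core::ffi::c_void;

extern "C" {
    // Hypothetical firmware heap routines.
    fn firmware_malloc(size: usize) -> *mut c_void;
    fn firmware_free(ptr: *mut c_void);
}

struct FirmwareAllocator;

unsafe impl GlobalAlloc for FirmwareAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Assumes firmware_malloc returns memory aligned for layout.align();
        // otherwise, over-allocate and align the pointer manually.
        firmware_malloc(layout.size()) as *mut u8
    }

    unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) {
        firmware_free(ptr as *mut c_void)
    }
}

#[global_allocator]
static ALLOCATOR: FirmwareAllocator = FirmwareAllocator;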


If the crate in question relies on concurrency, critical sections will need to be handled. Rust's core and alloc crates do not directly provide a means for defining these; however, the critical_section crate is commonly used to handle this functionality for a number of architectures, and it can be extended to support more.
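A sketch of that hookup with the critical_section crate, assuming its restore-state-u32 feature is enabled (so RawRestoreState is u32) and hypothetical firmware functions for masking interrupts:

use critical_section::RawRestoreState;

extern "C" {
    // Hypothetical firmware primitives for masking interrupts.
    fn firmware_disable_interrupts() -> u32; // returns the previous interrupt state
    fn firmware_restore_interrupts(state: u32);
}

struct FirmwareCriticalSection;
critical_section::set_impl!(FirmwareCriticalSection);

unsafe impl critical_section::Impl for FirmwareCriticalSection {
    unsafe fn acquire() -> RawRestoreState {
        firmware_disable_interrupts()
    }

    unsafe fn release(state: RawRestoreState) {
        firmware_restore_interrupts(state)
    }
}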


It can be useful to hook up functions for logging as well. Simple wrappers around the firmware’s existing logging functions can expose these to Rust and be used in place of print or eprint and the like. A convenient option is to implement the Log trait.
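A sketch of such a wrapper implementing the Log trait, assuming a hypothetical firmware_log_str function that takes a NUL-terminated string and using the heapless crate to format without an allocator:

use core::ffi::c_char;
use core::fmt::Write;
use log::{Log, Metadata, Record};

extern "C" {
    // Hypothetical firmware logging function expecting a NUL-terminated string.
    fn firmware_log_str(msg: *const c_char);
}

struct FirmwareLogger;
static LOGGER: FirmwareLogger = FirmwareLogger;

impl Log for FirmwareLogger {
    fn enabled(&self, _metadata: &Metadata) -> bool {
        true
    }

    fn log(&self, record: &Record) {
        // Format into a fixed-size buffer (no allocator needed); long messages are truncated.
        let mut buf: heapless::String<256> = heapless::String::new();
        let _ = write!(buf, "{}: {}", record.level(), record.args());
        let _ = buf.push('\0');
        unsafe { firmware_log_str(buf.as_ptr() as *const c_char) };
    }

    fn flush(&self) {}
}

The logger still needs to be registered once during startup, for example with log::set_logger(&LOGGER) and log::set_max_level.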

Fallible Allocations and alloc

Rust's alloc crate normally assumes that allocations are infallible (that is, memory allocations won’t fail). However, due to memory constraints, this isn’t true in most bare-metal environments. Under normal circumstances, Rust panics and/or aborts when an allocation fails; this may be acceptable behavior for some bare-metal environments, in which case there are no further considerations when using alloc.


If there’s a clear justification or requirement for fallible allocations, however, additional effort is required to ensure that either allocations can’t fail or that failures are handled.


One approach is to use a crate that provides statically allocated fallible collections, such as the heapless crate, or dynamic fallible allocations like fallible_vec. Another is to exclusively use try_* methods such as Vec::try_reserve, which check if the allocation is possible.
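For example, a sketch that copies an input buffer and surfaces allocation failure to the caller instead of panicking:

use alloc::collections::TryReserveError;
use alloc::vec::Vec;

// Copy an input buffer, reporting allocation failure instead of panicking.
fn copy_input(input: &[u8]) -> Result<Vec<u8>, TryReserveError> {
    let mut out = Vec::new();
    out.try_reserve(input.len())?; // fails cleanly if the heap is exhausted
    out.extend_from_slice(input);  // cannot allocate here: capacity is already reserved
    Ok(out)
}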


Rust is in the process of formalizing better support for fallible allocations, with an experimental allocator in nightly allowing failed allocations to be handled by the implementation. There is also the unstable cfg flag for alloc called no_global_oom_handling which removes the infallible methods, ensuring they are not used.

Build Optimizations

Building the Rust library with LTO is necessary to optimize for code size. The existing C/C++ code base does not need to be built with LTO when passing -C lto=true to rustc. Additionally, setting -C codegen-units=1 results in further optimizations, in addition to improving reproducibility.


If using Cargo to build, the following Cargo.toml settings are recommended to reduce the output library size:


[profile.release]
panic = "abort"
lto = true
codegen-units = 1
strip = "symbols"

# opt-level "z" may produce better results in some circumstances
opt-level = "s"


Pass the -Z remap-cwd-prefix=. flag to rustc (or to Cargo via the RUSTFLAGS environment variable when building with Cargo) to strip cwd path strings.
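For example, when building with Cargo on a nightly toolchain (required for -Z flags):

$ RUSTFLAGS="-Z remap-cwd-prefix=." cargo build --release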


In terms of performance, Rust demonstrates similar performance to C. The most relevant example may be the Rust binder Linux kernel driver, which found “that Rust binder has similar performance to C binder”.


When linking LTO’d Rust staticlibs together with C/C++, it’s recommended to ensure that only a single Rust staticlib ends up in the final linkage; otherwise, there may be duplicate symbol errors when linking. This may mean combining multiple Rust shims into a single static library by re-exporting them from a wrapper module, as sketched below.
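One way to do that is a small wrapper crate built as the staticlib, whose lib.rs simply re-exports the individual shim crates (the crate names here are placeholders):

// lib.rs of the combined staticlib; json_shim and dns_shim are hypothetical shim crates.
#![no_std]

// Re-exporting each shim ensures its #[no_mangle] C symbols end up in this one staticlib.
pub use dns_shim;
pub use json_shim;

The wrapper’s Cargo.toml would set crate-type = ["staticlib"] and list each shim as a dependency.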

Memory Safety for Firmware, Today

Using the process outlined in this blog post, you can begin to introduce Rust into large legacy firmware code bases immediately. Replacing security-critical components with off-the-shelf, open-source, memory-safe implementations and developing new features in a memory-safe language will lead to fewer critical vulnerabilities while also providing an improved developer experience.


Special thanks to our colleagues who have supported and contributed to these efforts: Roger Piqueras Jover, Stephan Chen, Gil Cukierman, Andrew Walbran, and Erik Gilling