Early Stable Update for Desktop

The Stable channel has been updated to 117.0.5938.48 for Windows and Mac as part of our early stable release to a small percentage of users. A full list of changes in this build is available in the log.


You can find more details about early Stable releases here.

Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Prudhvikumar Bommana

Google Chrome

Chrome Beta for Desktop Update

The Beta channel has been updated to 117.0.5938.48 for Windows, Mac and Linux.

A partial list of changes is available in the Git log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Prudhvi Bommana
Google Chrome

TSMixer: An all-MLP architecture for time series forecasting

Time series forecasting is critical to various real-world applications, from demand forecasting to pandemic spread prediction. In multivariate time series forecasting (forecasting multiple variables at the same time), existing methods can be split into two categories: univariate models and multivariate models. Univariate models focus on intra-series temporal patterns, such as trends and seasonality, within a time series with a single variable. Examples of such trends and seasonal patterns might be the way mortgage rates increase due to inflation, and how traffic peaks during rush hour. In addition to these intra-series patterns, multivariate models also process inter-series features, known as cross-variate information, which is especially useful when one series is a leading indicator of another series. For example, a rise in body weight may cause an increase in blood pressure, and increasing the price of a product may lead to a decrease in sales. Multivariate models have recently become popular solutions for multivariate forecasting as practitioners believe their ability to handle cross-variate information may lead to better performance.

In recent years, Transformer-based deep learning architectures have become a popular choice for multivariate forecasting models due to their superior performance on sequence tasks. However, advanced multivariate models surprisingly perform worse than simple univariate linear models on commonly used long-term forecasting benchmarks, such as Electricity Transformer Temperature (ETT), Electricity, Traffic, and Weather. These results raise two questions:

  • Does cross-variate information benefit time series forecasting?
  • When cross-variate information is not beneficial, can multivariate models still perform as well as univariate models?

In “TSMixer: An All-MLP Architecture for Time Series Forecasting”, we analyze the advantages of univariate linear models and reveal their effectiveness. Insights from this analysis lead us to develop Time-Series Mixer (TSMixer), an advanced multivariate model that leverages linear model characteristics and performs well on long-term forecasting benchmarks. To the best of our knowledge, TSMixer is the first multivariate model that performs as well as state-of-the-art univariate models on long-term forecasting benchmarks, where we show that cross-variate information is less beneficial. To demonstrate the importance of cross-variate information, we evaluate a more challenging real-world application, M5. Finally, empirical results show that TSMixer outperforms state-of-the-art models such as PatchTST, FEDformer, Autoformer, DeepAR and TFT.


TSMixer architecture

A key difference between linear models and Transformers is how they capture temporal patterns. On one hand, linear models apply fixed and time-step-dependent weights to capture static temporal patterns, and are unable to process cross-variate information. On the other hand, Transformers use attention mechanisms that apply dynamic and data-dependent weights at each time step, capturing dynamic temporal patterns and enabling them to process cross-variate information.
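To make this difference concrete, here is a minimal sketch in NumPy (our own illustration, not code from the paper; the weight values are made up) of forecasting the next time step from the previous three, first with time-step-dependent weights and then with data-dependent, attention-style weights:

import numpy as np

x = np.array([0.9, 1.1, 1.3])          # the previous three time steps

# Linear model: weights are learned during training and then fixed; they
# depend only on each step's position in the window, not on its value.
w_linear = np.array([0.1, 0.3, 0.6])   # hypothetical learned weights
y_linear = w_linear @ x

# Attention-style weighting: weights are recomputed from the data itself,
# so different input windows generally produce different weightings.
query = np.array([1.0])                # query for the step being forecast
keys = x.reshape(-1, 1)                # one key per past time step
scores = keys @ query                  # similarity of each step to the query
w_attn = np.exp(scores) / np.exp(scores).sum()   # softmax: data-dependent
y_attn = w_attn @ x

Note that w_linear stays the same for every input window once trained, while w_attn changes with the contents of each window.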

In our analysis, we show that under common assumptions of temporal patterns, linear models have naïve solutions that perfectly recover the time series or place bounds on the error, which means they are great solutions for learning the static temporal patterns of univariate time series effectively. In contrast, it is non-trivial to find similar solutions for attention mechanisms, as the weights applied to each time step are dynamic. Consequently, we develop a new architecture by replacing Transformer attention layers with linear layers. The resulting TSMixer model, which is similar to the computer vision MLP-Mixer method, alternates between applying multi-layer perceptrons along the time dimension and along the feature dimension, operations we call time-mixing and feature-mixing, respectively. The TSMixer architecture efficiently captures both temporal patterns and cross-variate information, as shown in the figure below. The residual designs ensure that TSMixer retains the capacity of temporal linear models while still being able to exploit cross-variate information.

Transformer block and TSMixer block architectures. TSMixer replaces the multi-head attention layer with time-mixing, a linear model applied on the time dimension.

Comparison between data-dependent weights (attention mechanisms) and time-step-dependent weights (linear models). This is an example of forecasting the next time step by learning the weights of the previous three time steps.
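Below is a simplified sketch of a single TSMixer-style block (NumPy; the shapes, the single layer per mixing step, and the omission of normalization and dropout are our simplifications of the published architecture):

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def tsmixer_block(x, w_time, w_feat):
    """One simplified mixing block. x has shape (time_steps, n_series)."""
    # Time-mixing: a weight matrix applied along the time axis and shared by
    # all series; the residual connection preserves linear-model capacity.
    x = x + relu(w_time @ x)
    # Feature-mixing: a weight matrix applied along the series axis; this is
    # where cross-variate information is exchanged between series.
    x = x + relu(x @ w_feat)
    return x

rng = np.random.default_rng(0)
T, C = 8, 3                                  # lookback length, number of series
x = rng.normal(size=(T, C))
w_time = rng.normal(scale=0.1, size=(T, T))  # mixes across time steps
w_feat = rng.normal(scale=0.1, size=(C, C))  # mixes across series
out = tsmixer_block(x, w_time, w_feat)       # shape (T, C)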


Evaluation on long-term forecasting benchmarks

We evaluate TSMixer on seven popular long-term forecasting datasets (ETTm1, ETTm2, ETTh1, ETTh2, Electricity, Traffic, and Weather), on which recent research has shown that univariate linear models outperform advanced multivariate models by large margins. We compare TSMixer with state-of-the-art multivariate models (TFT, FEDformer, Autoformer, Informer) and univariate models, including linear models and PatchTST. The figure below shows the average improvement in mean squared error (MSE) achieved by TSMixer compared with the others. The average is calculated across datasets and multiple forecasting horizons. We demonstrate that TSMixer significantly outperforms other multivariate models and performs on par with state-of-the-art univariate models. These results show that multivariate models are capable of performing as well as univariate models.

The average MSE improvement of TSMixer compared with other baselines. The red bars show multivariate methods and the blue bars show univariate methods. TSMixer achieves significant improvement over other multivariate models and achieves comparable results to univariate models.


Ablation study

We performed an ablation study comparing TSMixer with TMix-Only, a TSMixer variant that consists of time-mixing layers only. The results show that TMix-Only performs almost the same as TSMixer, which means the additional feature-mixing layers do not improve performance and confirms that cross-variate information is less beneficial on these popular benchmarks. The results validate the superior univariate model performance shown in previous research. However, existing long-term forecasting benchmarks do not represent well those real-world applications that do need cross-variate information: where time series are intermittent or sparse, temporal patterns alone may not be sufficient for forecasting. Therefore, it may be inappropriate to evaluate multivariate forecasting models solely on these benchmarks.


Evaluation on M5: Effectiveness of cross-variate information

To further demonstrate the benefit of multivariate models, we evaluate TSMixer on the challenging M5 benchmark, a large-scale retail dataset containing crucial cross-variate interactions. M5 contains information on 30,490 products collected over 5 years. Each product includes time series data, such as daily sales, sell price, and promotional event information, as well as static (non-time-series) features, such as store location and product category. The goal is to forecast the daily sales of each product for the next 28 days, evaluated using the weighted root mean squared scaled error (WRMSSE) from the M5 competition. The complicated nature of retail makes it challenging to forecast using only univariate models that focus on temporal patterns, so multivariate models with cross-variate information, and even auxiliary features, become essential.
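For reference, the metric scales each product's squared forecast error by the average squared one-step change in its own history, then combines products with weights based on recent dollar sales. As defined in the M5 competition (recalled here from the competition rules, not from the paper):

\mathrm{RMSSE}_i = \sqrt{\frac{\tfrac{1}{h}\sum_{t=n+1}^{n+h}\left(y_{i,t}-\hat{y}_{i,t}\right)^2}{\tfrac{1}{n-1}\sum_{t=2}^{n}\left(y_{i,t}-y_{i,t-1}\right)^2}}, \qquad \mathrm{WRMSSE} = \sum_i w_i \,\mathrm{RMSSE}_i

where h = 28 is the forecasting horizon, n is the length of the training history, and the weights w_i are derived from each series' cumulative dollar sales.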

First, we compare TSMixer to other methods that consider only the historical data, such as daily sales and historical sell prices. The results show that multivariate models outperform univariate models significantly, indicating the usefulness of cross-variate information. Among all compared methods, TSMixer leverages the cross-variate information most effectively and achieves the best performance.

Additionally, to leverage more information, such as static features (e.g., store location, product category) and future time series (e.g., a promotional event scheduled in coming days) provided in M5, we propose a principled design to extend TSMixer. The extended TSMixer first aligns the different types of features to the same length, and then applies multiple mixing layers to the concatenated features to make predictions. The extended TSMixer architecture outperforms models popular in industrial applications, including DeepAR and TFT, showcasing its strong potential for real-world impact.
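A minimal sketch of this two-stage idea (all names, shapes, and the single output layer here are our illustrative assumptions, not the published implementation):

import numpy as np

def align_and_mix(historical, future, static, out_len, rng):
    """historical: (len_hist, n_hist), future: (len_fut, n_fut), static: (n_static,)."""
    # Align stage: temporal projections map each feature group to out_len
    # steps; static features are repeated along the time axis.
    p_hist = rng.normal(scale=0.1, size=(out_len, historical.shape[0]))
    p_fut = rng.normal(scale=0.1, size=(out_len, future.shape[0]))
    h = p_hist @ historical                  # (out_len, n_hist)
    f = p_fut @ future                       # (out_len, n_fut)
    s = np.tile(static, (out_len, 1))        # (out_len, n_static)
    z = np.concatenate([h, f, s], axis=1)    # aligned, concatenated features
    # Mixing stage: a single linear head stands in for the stack of mixing
    # layers conditioned on static features.
    w_mix = rng.normal(scale=0.1, size=(z.shape[1], 1))
    return z @ w_mix                         # (out_len, 1) daily forecasts

rng = np.random.default_rng(0)
forecast = align_and_mix(historical=np.ones((56, 4)), future=np.ones((28, 2)),
                         static=np.ones(3), out_len=28, rng=rng)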

The architecture of the extended TSMixer. In the first stage (the align stage), it aligns the different types of features to the same length before concatenating them. In the second stage (the mixing stage), it applies multiple mixing layers conditioned on static features.

The WRMSSE on M5. The first three methods (blue) are univariate models. The middle three methods (orange) are multivariate models that consider only historical features. The last three methods (red) are multivariate models that consider historical, future, and static features.


Conclusion

We present TSMixer, an advanced multivariate model that leverages linear model characteristics and performs as well as state-of-the-art univariate models on long-term forecasting benchmarks. TSMixer creates new possibilities for the development of time series forecasting architectures by providing insights into the importance of cross-variate and auxiliary information in real-world scenarios. The empirical results highlight the need to consider more realistic benchmarks for multivariate forecasting models in future research. We hope that this work will inspire further exploration in the field of time series forecasting, and lead to the development of more powerful and effective models that can be applied to real-world applications.


Acknowledgements

This research was conducted by Si-An Chen, Chun-Liang Li, Nate Yoder, Sercan O. Arik, and Tomas Pfister.

Source: Google AI Blog


GSoC 2023: project results and feedback part 1



In 2023, Google Summer of Code brought 966 new contributors into open source software development to work with open source organizations on a 12+ week project. We had 168 participating open source organizations with mentors and contributors from over 75 countries this year.

For 19 years, Google Summer of Code has thrived thanks to the enthusiasm of our open source communities and the 19k+ volunteer mentors who have each spent 50-150 hours mentoring our 20k contributors since 2005! This year, there are 168 mentoring organizations and over 1,950 mentors participating in the 2023 program. A sincere thank you to our mentors and organization administrators for guiding and supporting our contributors this year. We are also looking forward to hosting many of the 2023 GSoC mentors on campus this fall for the annual Mentor Summit.

September 4th concluded the standard 12-week project timeline, and we are pleased to announce that 628 contributors have successfully completed this year’s program as of today, September 5th, 2023. Congratulations to all the contributors and mentors who have wrapped up their summer coding projects!

2023 has shown us that GSoC continues to grow in popularity with students and developers 19 years after the program began. GSoC had a record-high 5,679 contributor applicants from 106 countries submit project proposals this year. Interest in the program overall was also huge, with 43,765 people from 160 countries registering during the two-week application period.

The final step of every GSoC program is to hear back from mentors and contributors on their experiences through evaluations. This helps GSoC Admins continuously improve the program and gives us a chance to see the impact the program has on so many individuals! Some notable results and comments from the standard 12-week project length evaluations are below:

  • 95.63% of contributors think that GSoC helped their programming skills
  • 99.06% of contributors would recommend their GSoC mentors
  • 97.81% of contributors will continue working with their GSoC organization
  • 99.84% of contributors plan to continue working on open source
  • 82.81% of contributors said they would consider being a mentor
  • 96.25% of contributors said they would apply to GSoC again



At the suggestion of last year’s contributors, we added multiple live talks throughout the coding period to bring contributors together and provide tips to help them make the most of their GSoC experience. Each of these talks was attended, on average, by 42% of the 2023 GSoC contributors.

Another request from our previous contributors was to hear more about the cool projects their colleagues completed over the summer and to have a chance to discuss their own projects with others. Over the coming weeks we are hosting three lightning talk sessions in which over 40 of the 2023 contributors will present what they learned from their projects to the other contributors and their mentors.

We’ll be back in a couple of months to give a final update on the GSoC projects that conclude later this year. Almost 30% of contributors (286) are still completing their projects, so please stay tuned for their results in part two of this blog post later this year!

By Perry Burnham – Google Open Source

Introducing a new way to discover Display & Video 360 bulk tools

Today we’re announcing a new guide to help Display & Video 360 users discover available bulk tools that they can use to optimize their integrations.

The guide offers a high-level overview of the tools that allow you to integrate with Display & Video 360 at scale: the Display & Video 360 API, the Bid Manager API, and Reporting Data Transfer.

The guide also provides a recommendations page to help you choose the right bulk tool based on your needs and circumstances, as well as a page proposing potential platform-wide integrations that use multiple bulk tools.

You can navigate to this new guide from the existing Display & Video 360 API, Bid Manager API, or Reporting Data Transfer documentation using the Discover Bulk Tools tab at the top of the page.

Supporting Black tech entrepreneurs through the fourth Google for Startups Accelerator: Black Founders program

Posted by Lauren O’Neil, Startup Developer Ecosystem Lead, and Matt Ridenour, Head of US Startup Ecosystem

We are thrilled to announce our latest cohort of the Google for Startups Accelerator: Black Founders program as it embarks on its fourth year serving Black founders in the U.S. and Canada.

The 12 companies selected for this year’s cohort reflect the trends of the broader application pool: startups focused on improving healthcare outcomes, protecting the environment, reducing consumer energy consumption, and removing barriers to financial resources and home ownership (just to name a few). Additionally, these companies are using emerging AI technologies to streamline and simplify customer, consumer, and professional experiences at all levels.

"This year's cohort represents the massive opportunity that Google has to invest in the future of tech entrepreneurship, and how Google supports a broader ecosystem of driving innovation in key industries. It’s truly impressive to see how this cohort is tackling some of the world’s toughest problems, from energy to medicine to finance, and enabling the creator economy for games, music, and content."  
– Jeanine Banks, VP & General Manager, Developer X and Head of Developer Relations.


Hear from a few founders who will participate in this 10-week program, commencing September 26th.

Tell us the story of your startup:

Seyi Adesola, Cofounder & CEO of AfroHealth: “Losing my mom to a preventable illness ignited my journey into healthcare, leading me to become a professional healthcare practitioner while providing individual health coaching to my church community, family and friends. AfroHealth was formed as an expansion of this vision, an online platform to provide Black individuals with culturally-sensitive online health coaching.”

Nana Wilberforce, Founder & CEO of Akeptus: “In the United States alone, one-third of households grapple with monthly energy bills, with 20% on the brink of losing access, and this hardship disproportionately affects minority groups. Akeptus was founded to empower households and enterprises to control their energy costs via AI solutions that simplify energy management.”

Nicole Clay, Cofounder & CMO of Hue: “My co-founders and I came together as three women across the skin tone spectrum who struggled with representation in beauty and finding products for our unique complexions. We are an e-commerce technology company that matches shoppers to real people who share the same skin tone, skin type, or preferences as you.”

What are the primary technical challenges you’re hoping to address during the program?

Seyi: “During the program, our first priority is perfecting the integration of Artificial Intelligence with our platform. We hope to utilize the full potential of Google's ML and TensorFlow frameworks to improve health outcomes in the Afro community.”

Nana: “We're most excited about the accelerator for the hands-on Cloud and AI expertise to refine our algorithms and infrastructure, allowing us to scale our impact on sustainability.”

Nicole: “During the program, we are looking to apply AI/ML to create and optimize video content, and leverage AI to ease the process for everyday end-users to create their own video reviews.”

Learn more about all 12 participating startups below:

AfroHealth (Dallas, TX) is a digital health & wellness platform utilizing AI to provide personalized healthcare coaching to Black and Brown communities.

Akeptus (Glenwood, MD) is an AI-powered energy management platform that provides real-time insights and control to optimize usage and energy costs, reduce waste, and strengthen grid resilience.

CareCopilot (New York, NY) is a curated marketplace of key services that families need when caring for elderly loved ones.

eBanqo (Alpharetta, GA) is a customer engagement AI platform that empowers businesses of all sizes to provide instant and seamless service to their customers across all channels, 24/7.

Expedier (Hamilton, ON) is the first Black-led, Black-owned, BIPOC-facing digital bank in Canada, serving six million underserved BIPOC Canadians. (Learn more about Expedier on our Google Canada blog!)

Hue (San Francisco, CA) is an AI-powered video platform that helps brands generate and display short-form video reviews on e-commerce.

IndyGeneUS (Washington, D.C.) is a precision medicine company using next-generation sequencing technologies to identify unique gene variants in diseases that affect underrepresented populations.

Kwema (St. Louis, MO) is a smart badge reel for healthcare professionals that empowers clinicians to unobtrusively call for help when facing patient violence.

My Home Pathway (New York, NY) is a technology platform that guides first-time home buyers to approval faster by analyzing data and providing individualized recommendations.

Pagedip (Boulder, CO) is a no-code content publishing app that allows users to create beautifully efficient, powerfully effective and demonstrably measurable documents that work better for teams and their customers.

Plannly Health (Scottsdale, AZ) is patent-pending risk management software dedicated to mitigating the risk of human error in hospitals by offering a digital health solution that addresses provider stress, burnout, and critical life events or changes.

Rivet (Chicago, IL) is an AI-driven platform that helps creator teams use machine learning to find and understand their high-potential fans and provides actions and automations to unlock more revenue from them.

Find more information at g.co/blackfoundersaccelerator.

Use Abstraction to Improve Function Readability

This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.


By Palak Bansal and Mark Manley


Which version of the createPizza function below is easier to understand?

Version 1:

func createPizza(order *Order) *Pizza {
  pizza := &Pizza{Base: order.Size,
                  Sauce: order.Sauce,
                  Cheese: "Mozzarella"}

  if order.kind == "Veg" {
    pizza.Toppings = vegToppings
  } else if order.kind == "Meat" {
    pizza.Toppings = meatToppings
  }

  oven := oven.New()
  if oven.Temp != cookingTemp {
    for oven.Temp < cookingTemp {
      time.Sleep(checkOvenInterval)
      oven.Temp = getOvenTemp(oven)
    }
  }

  if !pizza.Baked {
    oven.Insert(pizza)
    time.Sleep(cookTime)
    oven.Remove(pizza)
    pizza.Baked = true
  }

  box := box.New()
  pizza.Boxed = box.PutIn(pizza)
  pizza.Sliced = box.SlicePizza(order.Size)
  pizza.Ready = box.Close()
  return pizza
}

Version 2:

func createPizza(order *Order) *Pizza {
  pizza := prepare(order)
  bake(pizza)
  box(pizza)
  return pizza
}

func prepare(order *Order) *Pizza {
  pizza := &Pizza{Base: order.Size,
                  Sauce: order.Sauce,
                  Cheese: "Mozzarella"}
  addToppings(pizza, order.kind)
  return pizza
}

func addToppings(pizza *Pizza, kind string) {
  if kind == "Veg" {
    pizza.Toppings = vegToppings
  } else if kind == "Meat" {
    pizza.Toppings = meatToppings
  }
}

func bake(pizza *Pizza) {
  oven := oven.New()
  heatOven(oven)
  bakePizza(pizza, oven)
}

func heatOven(oven *Oven) { … }

func bakePizza(pizza *Pizza, oven *Oven) { … }

func box(pizza *Pizza) { … }

You probably said Version 2 is easier, but why? Version 1 mixes together several levels of abstraction: low-level implementation details (e.g., how to heat the oven), intermediate-level functions (e.g., how to bake the pizza), and high-level abstractions (e.g., preparing, baking, and boxing the pizza).

Version 2 is easier to follow because the functions have a consistent level of abstraction, providing a top-down narrative of the code’s logic. createPizza is a high-level function that delegates the preparing, baking, and boxing steps to lower-level specialized functions with intuitive names. Those functions, in turn, delegate to their own lower-level specialized functions (e.g., heatOven) until they reach a function that handles implementation details without needing to call other functions.

Avoid mixing different abstraction layers in a single function. Instead, compose functions so that each one reads at a consistent level of abstraction and delegates details to functions one level down, providing a narrative. This self-documenting style is simpler to follow, debug, and reuse.

You can learn more about this topic in the book Clean Code by Robert C. Martin.