Early Stable Update for Android

Hi, everyone! We've just released Chrome 110 (110.0.5481.61) for Android to a small percentage of users: it'll become available on Google Play over the next few days. You can find more details about early Stable releases here.

This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Explore Looker data using Connected Sheets

What’s changing

We’re adding the ability to interactively explore modeled data from Looker, Google Cloud’s modern business intelligence platform, using Connected Sheets. This brings connectivity between the familiar interface of Google Sheets and the 50+ data sources available within Looker’s open ecosystem, including BigQuery, Cloud SQL, Snowflake, and Redshift. 

From a single source of truth, you can analyze data using pivot tables, charts, formulas, and other integrated data sources. Additionally, with this live connection, access is secure and your data stays up to date. 

Getting started 

  • Admins: 
    • To let users take advantage of this feature, make sure Connected Sheets for Looker is enabled in Looker's Admin menu
    • Visit the Help Center to learn more about using Connected Sheets
  • End users: 
    • If enabled by your admin, follow these steps to explore Looker data using Connected Sheets: 
      • In a Google Sheet, navigate to “Data” > “Data connectors” > “Connect to Looker” and enter the URL of a Looker instance, for example: https://example.looker.com. You will then need to authorize Sheets to access your Looker data. After you connect to an Explore, you can see the available data and continue your analysis in Google Sheets. 
      • Visit the Help Center to learn more about Connected Sheets for Looker

Rollout pace 


Availability 

  • Available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Plus, Education Standard, the Teaching and Learning Upgrade, Frontline, and Nonprofits 
  • Not available to legacy G Suite Basic and Business customers 
  • Available to users with personal Google Accounts 

Resources 

Google Sheets adds powerful new functions for advanced analysis

What’s changing

Last year, we launched Named functions, along with several powerful functions in Google Sheets, including LAMBDA and XLOOKUP. 

Today, we’re excited to announce 11 additional functions that introduce new concepts, offer more efficient options, and help with more advanced analysis (a few illustrative formulas follow the list): 
  • EPOCHTODATE: Converts a Unix epoch timestamp in seconds, milliseconds, or microseconds to a datetime in UTC. 
  • MARGINOFERROR: Calculates the amount of random sampling error given a range of values and a confidence level. 
  • TOROW: Transforms an array or range of cells into a single row. 
  • TOCOL: Transforms an array or range of cells into a single column. 
  • CHOOSEROWS: Creates a new array from the selected rows in the existing range. 
  • CHOOSECOLS: Creates a new array from the selected columns in the existing range. 
  • WRAPROWS: Wraps the provided row or column of cells by rows after a specified number of elements to form a new array. 
  • WRAPCOLS: Wraps the provided row or column of cells by columns after a specified number of elements to form a new array. 
  • VSTACK: Appends ranges vertically and in sequence to return a larger array. 
  • HSTACK: Appends ranges horizontally and in sequence to return a larger array. 
  • LET: Assigns names to the results of value_expressions and returns the result of the formula_expression. The formula_expression can use the names defined in the scope of the LET function. Each value_expression is evaluated only once within the LET function, even if later value_expressions or the formula_expression use it multiple times. 
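
As a quick illustration, here are sample formulas for a few of these functions. The ranges and values are made up for illustration, so treat these as sketches rather than canonical usage:

    =TOROW(A1:C3)                  returns the 3×3 range as a single nine-cell row
    =CHOOSEROWS(A1:C10, 1, -1)     returns a new array containing the first and last rows
    =WRAPROWS(A1:L1, 4)            wraps the twelve-cell row into a 3-row, 4-column array
    =VSTACK(A1:C3, A5:C6)          stacks the two ranges vertically into one five-row array
    =MARGINOFERROR(A1:A100, 0.95)  margin of error for the sample at a 95% confidence level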

Who’s impacted 

End users 


Why you’d use it 

In addition to providing advanced statistical calculations like margin of error, these functions provide new functionality in the form of: 
  • Powerful new ways to manipulate and work with arrays using TOROW, TOCOL, WRAPROWS, WRAPCOLS, CHOOSEROWS, CHOOSECOLS, VSTACK, and HSTACK. 
  • Converting Unix epoch timestamps to readable date formats without the need for manual calculations using EPOCHTODATE. 
  • More efficient, easier-to-read complex formulas that use LET to assign names to calculation results (see the examples below).
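
For example, assuming hypothetical sample data in column A:

    =EPOCHTODATE(1655906710)           converts a Unix timestamp in seconds to a UTC datetime
    =EPOCHTODATE(1655906710000, 2)     the optional second argument selects milliseconds (3 selects microseconds)
    =LET(total, SUM(A1:A10), count, COUNT(A1:A10), total / count)
                                       names two intermediate results once, then reuses them in the final expression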

Getting started 

  • Admins: There is no admin control for this feature. 
  • End users: In Google Sheets, all functions can be inserted using function autocomplete after typing the equals sign (=), using the Insert menu, or using the function dropdown in the toolbar. 

Rollout pace 


Availability 

  • Available to all Google Workspace customers, as well as legacy G Suite Basic and Business customers 
  • Available to users with personal Google Accounts 

Resources 

The Flan Collection: Advancing open source methods for instruction tuning

Language models are now capable of performing many new natural language processing (NLP) tasks by reading instructions, often for tasks they hadn’t seen before. The ability to reason about new tasks is mostly credited to training models on a wide variety of unique instructions, known as “instruction tuning”, which was introduced by FLAN and extended in T0, Super-Natural Instructions, MetaICL, and InstructGPT. However, much of the data that drives these advances remains unreleased to the broader research community. 

In “The Flan Collection: Designing Data and Methods for Effective Instruction Tuning”, we closely examine and release a newer and more extensive publicly available collection of tasks, templates, and methods for instruction tuning to advance the community’s ability to analyze and improve instruction-tuning methods. This collection was first used in Flan-T5 and Flan-PaLM, the latter of which achieved significant improvements over PaLM. We show that training a model on this collection yields improved performance over comparable public collections on all tested evaluation benchmarks, e.g., a 3%+ improvement on the 57 tasks in the Massive Multitask Language Understanding (MMLU) evaluation suite and an 8% improvement on BigBench Hard (BBH). Analysis suggests the improvements stem both from the larger and more diverse set of tasks and from applying a set of simple training and data augmentation techniques that are cheap and easy to implement: mixing zero-shot, few-shot, and chain-of-thought prompts during training, enriching tasks with input inversion, and balancing task mixtures. Together, these methods enable the resulting language models to reason more competently over arbitrary tasks, even those for which they haven’t seen any fine-tuning examples. We hope making these findings and resources publicly available will accelerate research into more powerful and general-purpose language models.
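
As a rough sketch of one of these augmentations, input inversion derives a new training task from an existing one by swapping its input and output. The snippet below is a minimal illustration under assumed field names and prompt wording, not the actual Flan pipeline:

    import random

    def invert_task(example: dict) -> dict:
        # Input inversion: turn an (input -> output) pair into a new task that
        # asks the model to produce the original input from the output.
        return {
            "input": f"Write a question to which the answer is: {example['output']}",
            "output": example["input"],
        }

    def augment(dataset: list[dict], inversion_rate: float = 0.3) -> list[dict]:
        # Enrich the task mixture by adding inverted examples alongside the originals.
        inverted = [invert_task(ex) for ex in dataset if random.random() < inversion_rate]
        return dataset + inverted

    examples = [{"input": "What is the capital of France?", "output": "Paris"}]
    print(augment(examples, inversion_rate=1.0))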


Public instruction tuning data collections

Since 2020, several instruction tuning task collections have been released in rapid succession, shown in the timeline below. Recent research has yet to coalesce around a unified set of techniques, with different sets of tasks, model sizes, and input formats all represented. This new collection, referred to below as “Flan 2022”, combines prior collections from FLAN, P3/T0, and Natural Instructions with new dialog, program synthesis, and complex reasoning tasks.

A timeline of public instruction tuning collections, including: UnifiedQA, CrossFit, Natural Instructions, FLAN, P3/T0, MetaICL, ExT5, Super-Natural Instructions, mT0, Unnatural Instructions, Self-Instruct, and OPT-IML Bench. The table describes the release date, the task collection name, the model name, the base model(s) that were fine-tuned with this collection, the model size, whether the resulting model is Public (green) or Not Public (red), whether the models were trained with zero-shot prompts (“ZS”), few-shot prompts (“FS”), and chain-of-thought prompts (“CoT”) together (“+”) or separately (“/”), the number of tasks from this collection in Flan 2022, the total number of examples, and some notable methods used in these works that relate to the collections. Note that the numbers of tasks and examples vary under different assumptions and so are approximations. Counts for each are reported using task definitions from the respective works.

In addition to scaling to more instructive training tasks, The Flan Collection combines training with different types of input-output specifications, including just instructions (zero-shot prompting), instructions with examples of the task (few-shot prompting), and instructions that ask for an explanation with the answer (chain of thought prompting). Except for InstructGPT, which leverages a collection of proprietary data, Flan 2022 is the first work to publicly demonstrate the strong benefits of mixing these prompting settings together during training. Instead of a trade-off between the various settings, mixing prompting settings during training improves all prompting settings at inference time, as shown below for both tasks held-in and held-out from the set of fine-tuning tasks.
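
A minimal sketch of what mixing prompting settings during training can look like follows; the helper names and template wording are hypothetical, not the actual Flan templates:

    import random

    def zero_shot(instruction: str, x: str) -> str:
        # Just the instruction and the input.
        return f"{instruction}\n\n{x}"

    def few_shot(instruction: str, exemplars: list[tuple[str, str]], x: str) -> str:
        # The instruction plus worked examples of the task before the new input.
        shots = "\n\n".join(f"{inp}\n{out}" for inp, out in exemplars)
        return f"{instruction}\n\n{shots}\n\n{x}"

    def chain_of_thought(instruction: str, x: str) -> str:
        # Ask for an explanation along with the answer.
        return f"{instruction} Explain your reasoning step by step.\n\n{x}"

    def render(example: dict, exemplars: list[tuple[str, str]]) -> str:
        # Each training example is rendered in a randomly chosen setting, so the
        # model sees all three prompt formats mixed together during fine-tuning.
        setting = random.choice(["zero_shot", "few_shot", "chain_of_thought"])
        if setting == "zero_shot":
            return zero_shot(example["instruction"], example["input"])
        if setting == "few_shot":
            return few_shot(example["instruction"], exemplars, example["input"])
        return chain_of_thought(example["instruction"], example["input"])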

Training jointly with zero-shot and few-shot prompt templates improves performance on both held-in and held-out tasks. The stars indicate the peak performance in each setting. Red lines denote zero-shot prompted evaluation; lilac denotes few-shot prompted evaluation.

Evaluating instruction tuning methods

To understand the overall effects of swapping one instruction tuning collection for another, we fine-tune equivalently sized T5 models on popular public instruction-tuning collections, including Flan 2021, T0++, and Super-Natural Instructions. Each model is then evaluated on a set of tasks that are already included in each of the instruction tuning collections, a set of five chain-of-thought tasks, and a set of 57 diverse tasks from the MMLU benchmark, both with zero-shot and few-shot prompts. In each case, the new Flan 2022 model, Flan-T5, outperforms these prior works, demonstrating a more powerful general-purpose NLP reasoner.

Comparing public instruction tuning collections on held-in, chain-of-thought, and held-out evaluation suites, such as BigBench Hard and MMLU. All models except OPT-IML-Max (175B) are trained by us, using T5-XL with 3B parameters. Green text indicates improvement over the next best comparable T5-XL (3B) model.

Single task fine-tuning

In applied settings, practitioners usually deploy NLP models fine-tuned specifically for one target task, where training data is already available. We examine this setting to understand how Flan-T5 compares to T5 models as a starting point for applied practitioners. Three settings are compared: fine-tuning T5 directly on the target task, using Flan-T5 without further fine-tuning on the target task, and fine-tuning Flan-T5 on the target task. For both held-in and held-out tasks, fine-tuning Flan-T5 offers an improvement over fine-tuning T5 directly. In some instances, usually where training data is limited for a target task, Flan-T5 without further fine-tuning outperforms T5 with direct fine-tuning.
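
As a rough sketch of how a practitioner might reproduce these comparisons, the released checkpoints can be loaded with the Hugging Face transformers library (one common distribution channel); the prompt below is a made-up example, not an evaluation task from the paper:

    from transformers import AutoTokenizer, T5ForConditionalGeneration

    # Swap "google/flan-t5-base" for "t5-base" to get the non-instruction-tuned baseline.
    tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
    model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base")

    # The "Flan-T5 without further fine-tuning" setting: prompt the model directly.
    inputs = tokenizer(
        "Is the following review positive or negative? Review: I loved this movie.",
        return_tensors="pt",
    )
    output_ids = model.generate(**inputs, max_new_tokens=10)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

    # For the two fine-tuning settings, the same model object can be passed to any
    # standard seq2seq training loop on the target task's training data.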

Flan-T5 outperforms T5 on single-task fine-tuning. We compare single-task fine-tuned T5 (blue bars), single-task fine-tuned Flan-T5 (red), and Flan-T5 without any further fine-tuning (beige).

An additional benefit of using Flan-T5 as a starting point is that training is significantly faster and cheaper, converging more quickly than T5 fine-tuning, and usually peaking at higher accuracies. This suggests less task-specific training data may be necessary to achieve similar or better results on a particular task.

Flan-T5 converges faster than T5 on single-task fine-tuning, for each of five held-out tasks from Flan fine-tuning. Flan-T5’s learning curve is indicated with the solid lines, and T5’s learning curve with the dashed line. All tasks are held out during Flan fine-tuning.

There are significant energy efficiency benefits for the NLP community in adopting instruction-tuned models like Flan-T5 for single-task fine-tuning, rather than conventional non-instruction-tuned models. While pre-training and instruction fine-tuning are financially and computationally expensive, they are a one-time cost, usually amortized over millions of subsequent fine-tuning runs, which, for the most prominent models, can become more costly in aggregate. Instruction-tuned models offer a promising way to significantly reduce the number of fine-tuning steps needed to achieve the same or better performance.


Conclusion

The new Flan instruction tuning collection unifies the most popular prior public collections and their methods, while adding new templates and simple improvements like training with mixed prompt settings. The resulting method outperforms Flan, P3, and Super-Natural Instructions on held-in, chain-of-thought, MMLU, and BBH benchmarks by 3–17% across zero-shot and few-shot variants. Results suggest this new collection serves as a more performant starting point for researchers and practitioners interested in either generalizing to new instructions or fine-tuning on a single new task.


Acknowledgements

It was a privilege to work with Jason Wei, Barret Zoph, Le Hou, Hyung Won Chung, Tu Vu, Albert Webson, Denny Zhou, and Quoc V Le on this project.

Source: Google AI Blog


Kickstarting your tech writing career with open source

After graduating from university in the midst of a pandemic, I knew that I wanted to be a tech writer, but I wasn’t sure how to start. Google Season of Docs was the perfect way to launch my career; it let me work on my own terms and led to me starting my own business and to subsequent tech writing jobs in open source. I am currently working as a tech writer at Google and volunteering for documentation-related open source projects.

Should you join an open source project?

The charm (and challenge) of open source is that the line between creators and users becomes blurred. Do you wish that your beloved tool had that one feature you really need? You can add it yourself! Other users might support your feature request and may even help you build it. Before you know it, you’re part of a wonderful community bound together by passion.

People join open source projects for many reasons:

  • They believe in the vision of a project and want to help build it
  • They want to build professional and technical skills
  • They are motivated by the possibility of hundreds—or even thousands—of people using their work

Life in open source as a tech writer

Many contributors in open source come from a software engineering background. They are great at building software, but they sometimes struggle with documentation. Through Google Season of Docs, open source projects can hire technical writers to help them create much-needed content. These technical writers are often the first people in the project working exclusively on educational content—which comes with ups and downs.

The fun parts

As an open source technical writer, you will often be in close contact with your users. Through researching user needs, technical writers develop more empathy for the struggles of the users. Many tech writers (myself included) find that this closeness helps them write better.

Contributing to open source also allows you to create documentation in different contexts. For example, you might have authored content in a CMS in the past—diving into an open source project gives you the opportunity to explore a docs-as-code workflow. Another circumstance could be that you wrote documentation in a different industry and you want to see what it’s like to document software. Changing up your writing routine helps you find more creative ways to tackle problems for the next project you work on.

The hard parts

Documentation quality can be quite variable in open source. While some pages might be really useful, others might be outdated, might not follow the user workflow, or might cover far too much information on one page. Making sense of the existing documentation landscape can feel like a daunting task.

Most open source projects suffer from gaps in the documentation. Since open source developers are so enmeshed with their code and the project, they have a lot of context, and suffer from the “curse of knowledge”. It’s hard for contributors, or anyone who has held a position for a while, to remember what it was like to be a beginner or new to a project. When developers write documentation, their brain auto-completes what is missing on the page.

Because many people work on open source for personal satisfaction, you might experience pushback from people who are protective of their documentation. I find it helpful to view pushback as an act of caring about documentation. Take a closer look at why you are receiving pushback:

  • Do the developers have concerns about your technical understanding?
  • Are they not ready to let go of their document?
  • Do you have different ideas of who the user is and what their goals are?

Understanding developer concerns can help you reach the shared goal of improved documentation.

Succeeding in Google Season of Docs and beyond

These tips helped me make the most of my Google Season of Docs experience.

Gain clarity

Take time at the beginning of the project to really understand the software, the user’s needs, and your docs landscape. (I allocated one third of my entire project timeline to gaining clarity.) Talk to your project mentors, do user research, and perform a content audit—this will help you understand the current structure and identify weaknesses and gaps in the content.

Keep your community in the loop

Open source communities attract contributors from all over the world—which means communication is usually asynchronous and in writing. Transparent communication is a must to keep your users (and potential co-creators) engaged. When they know what’s going on, it’s easier for them to chip in.

Deal with pushback

Transparent communication and a solid documentation plan go a long way towards addressing concerns. It’s easier to receive support if your team knows what you’re doing.

Build a professional support network

Find other tech writers to geek out with, especially if you’re the only technical writer in your project. Groups like Write the Docs and The Good Docs Project are good places to find like-minded people to brainstorm and learn with.

I hope you find a project that interests you and the bandwidth to participate in Google Season of Docs. It was a worthwhile experience that helped me advance my career, and I hope the same for you.

P.S. You can find a detailed write-up of my work for Season of Docs ’21 on my website.

By Tina Luedtke, Technical Writer – Google


Taking the next step: OSS-Fuzz in 2023

Since launching in 2016, Google's free OSS-Fuzz code testing service has helped get over 8,800 vulnerabilities and 28,000 bugs fixed across 850 projects. Today, we’re happy to announce an expansion of our OSS-Fuzz Reward Program, plus new features in OSS-Fuzz and our involvement in supporting academic fuzzing research.

Refreshed OSS-Fuzz rewards

The OSS-Fuzz project's purpose is to support the open source community in adopting fuzz testing, or fuzzing — an automated code testing technique for uncovering bugs in software. In addition to the OSS-Fuzz service, which provides a free platform for continuous fuzzing to critical open source projects, we established an OSS-Fuzz Reward Program in 2017 as part of our wider Patch Rewards Program.
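
To make the technique concrete, here is a minimal sketch of a fuzz target of the kind OSS-Fuzz runs continuously, written with Atheris, the fuzzing engine OSS-Fuzz uses for Python projects. The parse_config function is a hypothetical piece of code under test, not part of OSS-Fuzz:

    import sys
    import atheris

    def parse_config(data: bytes) -> None:
        # Hypothetical code under test: mishandles one class of malformed input.
        text = data.decode("utf-8", errors="ignore")
        if text.startswith("[section") and "]" not in text:
            raise ValueError("unterminated section header")

    def TestOneInput(data: bytes) -> None:
        # The fuzzer repeatedly calls this entry point with generated inputs; any
        # uncaught exception or crash is reported as a bug.
        parse_config(data)

    atheris.instrument_all()  # enable coverage feedback for the code under test
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()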

We’ve operated this program successfully for the past five years, and to date, the OSS-Fuzz Reward Program has awarded over $600,000 to over 65 different contributors for their help integrating new projects into OSS-Fuzz.

Today, we’re excited to announce that we’ve expanded the scope of the OSS-Fuzz Reward Program considerably, introducing many new types of rewards!

These new reward types cover contributions such as:

  • Project fuzzing coverage increases
  • Notable FuzzBench fuzzer integrations
  • Integrating a new sanitizer (example) that finds two new vulnerabilities

These changes boost the total rewards possible per project integration from a maximum of $20,000 to $30,000 (depending on the criticality of the project). In addition, we’ve established two new reward categories that reward wider improvements across all OSS-Fuzz projects, with up to $11,337 available per category.

For more details, see the fully updated rules for our dedicated OSS-Fuzz Reward Program.

OSS-Fuzz improvements

Over the years, we’ve continuously improved OSS-Fuzz’s infrastructure, expanded our language offerings to cover C/C++, Go, Rust, Java, Python, and Swift, and introduced support for new frameworks such as FuzzTest. Additionally, as part of an ongoing collaboration with Code Intelligence, we’ll soon have support for JavaScript fuzzing through Jazzer.js.

FuzzIntrospector support

Last year, we launched the OpenSSF FuzzIntrospector tool and integrated it into OSS-Fuzz.

We’ve continued to build on this by adding new language support and better analysis, and now C/C++, Python, and Java projects integrated into OSS-Fuzz have detailed insights on how the coverage and fuzzing effectiveness for a project can be improved.

The FuzzIntrospector tool provides these insights by identifying complex code blocks that are blocked during fuzzing at runtime, as well as suggesting new fuzz targets that can be added. We’ve seen users successfully use this tool to improve the coverage of jsonnet, file, xpdf, and bzip2, among others.

Anyone can use this tool to increase the coverage of a project and in turn be rewarded as part of the refreshed OSS-Fuzz rewards. See the full list of all OSS-Fuzz FuzzIntrospector reports to get started.

Fuzzing research and competition

The OSS-Fuzz team maintains FuzzBench, a service that enables security researchers in academia to test fuzzing improvements against real-world open source projects. Approaching its third anniversary of providing free benchmarking, FuzzBench is cited by over 100 papers and has been used as a platform for academic fuzzing workshops such as NDSS’22.

This year, FuzzBench has been invited to participate in the SBFT'23 workshop at ICSE, a premier research conference in the field, which for the first time is hosting a fuzzing competition. During this competition, the FuzzBench platform will be used to evaluate state-of-the-art fuzzers submitted by researchers from around the globe on both code coverage and bug-finding metrics.

Get involved!

We believe these initiatives will help scale security testing efforts across the broader open source ecosystem. We hope to accelerate the integration of critical open source projects into OSS-Fuzz by providing stronger incentives to security researchers and open source maintainers. Combined with our involvement in fuzzing research, these efforts are making OSS-Fuzz an even more powerful tool, enabling users to find more bugs, and, more critically, find them before the bad guys do!

3 things to expect at the Google for Games Developer Summit

Posted by Greg Hartrell, Product Director, Games on Play/Android

Save the date for this year’s virtual Google for Games Developer Summit, happening on March 14 at 9 a.m. PT. You’ll hear about product updates and discover new ways to build great games, connect with players around the globe, and grow your business.

Here are three things you can expect during and after the event:

1. Hear about Google’s newest games products for developers

The summit kicks off at 9 a.m. PT, with keynotes from teams across Android, Google Play, Ads, and Cloud. They’ll discuss the latest trends in the gaming industry and share new products we’re working on to help developers build great experiences for gamers everywhere.

2. Learn how to grow your games business in on-demand sessions

Following the keynotes, more than 15 on-demand sessions will be available starting at 10 a.m. PT, where you can learn more about upcoming products, watch technical deep dives and hear inspiring stories from other game developers. Whether you’re looking to expand your reach, reduce cheating or better understand in-game ad formats, there will be plenty of content to help you take your game to the next level.

3. Join us at the Game Developers Conference

If you’re looking for even more gaming content after the summit, join us in person for the Game Developers Conference in San Francisco. We’ll host developer sessions on March 20 and 21 to share demos, technical best practices and more.

Visit g.co/gamedevsummit to learn more and get updates about both events, including the full agendas. See you there!