Firebase Stories: Celebrating our developer community

Posted by Akua Prempeh, Developer Marketing

When we ask you what you like best about Firebase, a lot of you tell us it’s the community that makes Firebase special. We are excited to highlight developers in the community who are using Firebase in their apps through a new series called Firebase Stories.

Firebase Stories celebrates developers whose apps are helping promote positive change in their communities. Starting today, and over the coming months, you'll hear from developers and founders from around the world about their app development journeys. Additionally, these developers will demo how they are using Firebase tools in their projects so you can apply Firebase to your own apps. Everyone can watch the demos together and chat with both the developers and members of the Firebase team along the way. We’ll also share guided codelabs on these Firebase features so you can get hands-on experience with them. Stay tuned for more details!

Lastly, we’d love to hear from you too. Use the hashtag #FirebaseStories on your social channels to share how Firebase empowers you throughout your app development journey. We will regularly select and share some stories on our channels.

To learn more about this campaign, visit our website, follow us on Twitter and subscribe to the Firebase YouTube channel.

GlobalFoundries joins Google’s open source silicon initiative

Over the last year we have been busy planning the expansion of our free open source silicon design and manufacturing program to further grow the community of developers and companies building custom silicon, and build a thriving ecosystem around open source hardware.

Today, we’re excited to announce an expansion of this program and our partnership with GlobalFoundries. Together, we're releasing the Process Design Kit (PDK) for the GlobalFoundries 180MCU technology platform under the Apache 2.0 license, along with a no-cost silicon realization program to manufacture open source designs on the Efabless platform. This open source PDK is the first result of our ongoing partnership with GF. Given the scale and breadth of GF’s technology and manufacturing expertise, we expect to do more together to expand access to, and innovation in, semiconductor development and manufacturing.
GF180MCU 1P5M (five-metal) stack-up with a 9 kÅ top metal and MIM capacitors between the M3 and M4 layers.
Google started this program with SkyWater Technologies by releasing one of their PDKs under the Apache 2.0 license. We sponsored six shuttle runs over the course of two years, allowing the open source community to submit more than 350 unique designs, of which around 240 were manufactured at no cost.
We cannot overstate the milestone that this new partnership represents in the foundry ecosystem market.

Over the past few years, the world has experienced an unprecedented acceleration in the adoption of digital capabilities, driven by the pandemic and by technology megatrends that have shifted every aspect of human life. According to GlobalFoundries, roughly 73% of foundry revenue is now associated with high-growth markets such as mobile, IoT, and automotive. This transition has not only given rise to a “New Golden Age” of semiconductors but also to a tectonic shift in how we define and deliver innovation as an industry.

Specifically, applications using 180nm are at a global capacity of more than 16 million wafers a year, and are expected to grow to more than 22 million wafers by 2026, according to GlobalFoundries.

The 180nm application space continues to see strong market traction in motor controllers, RFID, general-purpose MCUs and PMICs, along with emerging applications such as IoT sensors, dual-frequency RFID and motor drives.

The collaboration between GlobalFoundries and Google will help drive innovation for the application and silicon engineers designing in these high growth areas, and is an unambiguous affirmation of the viability of the open source model for the foundry ecosystem.

The GF 180nm technology platform offers open source silicon designers new capabilities for high-volume production, affordability, and a wider range of voltage options. The PDK includes:
  • Digital standard cell libraries (7-track and 9-track)
  • Low (3.3V), medium (5V, 6V) and high (10V) voltage devices
  • SRAM macros (64x8, 128x8, 256x8, 512x8)
  • I/O and primitive (resistors, capacitors, transistors, eFuses) cell libraries
Open sourcing more PDKs is a critical step in the development of the open source silicon ecosystem:
  • Open source EDA tools can now add support for multiple process technologies.
  • Researchers can produce fully-reproducible designs against multiple technology baselines.
  • Popular open source IP blocks can be ported to different process technologies.
We cannot build this on our own; we need you: software developers and hardware engineers, researchers and undergrad students, hobbyists and industry veterans, new startups and industry players alike, to bring your fresh ideas and your proven experience to help us grow the open source silicon ecosystem.

By Johan Euphrosine and Ethan Mahintorabi – Hardware Toolchains Team

Efficient Sequence Modeling for On-Device ML

The increasing demand for machine learning (ML) model inference on-device (for mobile devices, tablets, etc.) is driven by the rise of compute-intensive applications, the need to keep certain data on device for privacy and security reasons, and the desire to provide services when a network connection may not be available. However, on-device inference introduces a myriad of challenges, ranging from modeling to platform support requirements. These challenges relate to how different architectures are designed to optimize memory and computation, while still trying to maintain the quality of the model. From a platform perspective, the issue is identifying operations and building on top of them in a way that can generalize well across different product use cases.

In previous research, we combined a novel technique for generating embeddings (called projection-based embeddings) with efficient architectures like QRNN (pQRNN) and showed that they perform well on a number of classification problems. Augmenting them with distillation techniques provides an additional bump in end-to-end quality. Although this is an effective approach, it does not scale to bigger and more extensive vocabularies (i.e., all possible Unicode or word tokens that can be fed to the model). Additionally, the output from the projection operation itself doesn’t contain trainable weights, so the model cannot take advantage of pre-training.

Token-free models presented in ByT5 are a good starting point for on-device modeling that can address pre-training and scalability issues without the need to increase the size of the model. This is possible because these approaches treat text inputs as a stream of bytes (each byte has a value ranging from 0 to 255), which reduces the vocabulary size for the embedding tables from ~30,000 to 256. Although ByT5 presents a compelling alternative for on-device modeling, going from word-level representation to byte stream representation increases the sequence lengths linearly; with an average word length of four characters and a single character having up to four bytes, the byte sequence length increases proportionally to the word length. This can lead to a significant increase in inference latency and computational costs.
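To make the byte-stream idea concrete, here is a small sketch in plain Python (illustrative only, not the SeqFlowLite implementation) showing how text maps to byte tokens and why the sequence gets longer:

    # Byte-level tokenization as described above: every UTF-8 byte becomes one token
    # with a value in [0, 255], so the embedding vocabulary shrinks to 256 entries
    # while the sequence becomes longer than its word-level counterpart.
    text = "on-device ML"
    byte_tokens = list(text.encode("utf-8"))
    print(len(text.split()), "words ->", len(byte_tokens), "byte tokens")  # 2 words -> 12 byte tokens

    # Non-ASCII characters expand further: each can take up to 4 bytes.
    print(len("学习"), "characters ->", len("学习".encode("utf-8")), "byte tokens")  # 2 -> 6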

We address this problem by developing and releasing three novel byte-stream sequence models for the SeqFlowLite library (ByteQRNN, ByteTransformer and ByteFunnelTransformer), all of which can be pre-trained on unsupervised data and can be fine-tuned for specific tasks. These models leverage recent innovations introduced by Charformer, including a fast character Transformer-based model that uses a gradient-based subword tokenization (GBST) approach to operate directly at the byte level, as well as a “soft” tokenization approach, which allows us to learn token boundaries and reduce sequence lengths. In this post, we focus on ByteQRNN and demonstrate that the performance of a pre-trained ByteQRNN model is comparable to BERT, despite being 300x smaller.

Sequence Model Architecture
We leverage pQRNN, ByT5 and Charformer along with platform optimizations, such as in-training quantization (which tracks minimum and maximum float values for model activations and weights in order to quantize the inference model), which reduces model sizes to roughly one-fourth, to develop an end-to-end model called ByteQRNN (shown below). First, we use a ByteSplitter operation to split the input string into a byte stream and feed it to a smaller embedding table that has a vocabulary size of 259 (256 byte values plus 3 additional meta tokens).
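As a rough illustration of this first step, the NumPy sketch below splits a string into byte ids and looks them up in a 259-entry embedding table; the helper name, the meta-token offset, and the embedding width are assumptions made for the example, not the actual SeqFlowLite API:

    import numpy as np

    META_TOKENS = 3                    # e.g., padding/start/end markers (assumed)
    VOCAB_SIZE = 256 + META_TOKENS     # 259, as in the post
    EMBED_DIM = 64                     # illustrative on-device embedding width

    embedding_table = np.random.normal(size=(VOCAB_SIZE, EMBED_DIM)).astype(np.float32)

    def split_to_bytes(text: str) -> np.ndarray:
        """Turn a string into byte ids, offset past the meta tokens."""
        byte_values = np.frombuffer(text.encode("utf-8"), dtype=np.uint8)
        return byte_values.astype(np.int32) + META_TOKENS

    byte_ids = split_to_bytes("hello world")
    embedded = embedding_table[byte_ids]          # shape: (sequence_length, EMBED_DIM)
    print(byte_ids.shape, "->", embedded.shape)   # (11,) -> (11, 64)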

The output from the embedding layer is fed to the GBST layer, which is equipped with in-training quantization and combines byte-level representations with the efficiency of subword tokenization while enabling end-to-end learning of latent subwords. We “soft” tokenize the byte stream sequences by enumerating and combining each subword block length with scores (computed with a quantized dense layer) at each strided token position (i.e., at token positions that are selected at regular intervals). Next, we downsample the byte stream to a manageable sequence length and feed it to the encoder layer.
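The NumPy sketch below captures the spirit of the soft-tokenization and downsampling steps; it is a simplified, unquantized approximation of GBST, and the block sizes, random scoring layer, and stride are assumptions made for the example:

    import numpy as np

    def soft_tokenize(x, block_sizes=(1, 2, 3, 4), stride=4, rng=np.random.default_rng(0)):
        """x: (seq_len, dim) byte embeddings -> (seq_len // stride, dim) downsampled sequence."""
        seq_len, dim = x.shape
        scorer = rng.normal(size=(dim, 1)).astype(np.float32)   # stand-in for the dense scoring layer

        candidates, scores = [], []
        for b in block_sizes:
            # Represent each position by the mean of the b byte embeddings starting there.
            padded = np.concatenate([x, np.zeros((b - 1, dim), np.float32)], axis=0)
            block = np.stack([padded[i:i + b].mean(axis=0) for i in range(seq_len)])
            candidates.append(block)                             # (seq_len, dim)
            scores.append(block @ scorer)                        # (seq_len, 1)

        # Softmax over block sizes at each position, then mix the candidate blocks.
        weights = np.exp(scores - np.max(scores, axis=0))
        weights = weights / np.sum(weights, axis=0)
        mixed = sum(w * c for w, c in zip(weights, candidates))  # (seq_len, dim)

        # Downsample to a shorter sequence for the encoder.
        trimmed = mixed[: (seq_len // stride) * stride]
        return trimmed.reshape(-1, stride, dim).mean(axis=1)

    x = np.random.default_rng(1).normal(size=(32, 16)).astype(np.float32)
    print(soft_tokenize(x).shape)   # (8, 16): sequence length reduced by the stride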

The output from the GBST layer can be downsampled to a lower sequence length for efficient encoder computation or can be used by an encoder, like Funnel Transformer, which pools the query length and reduces the self-attention computation to create the ByteFunnelTransformer model. The encoder in the end-to-end model can be replaced with any other encoder layer, such as the Transformer from the SeqFlowLite library, to create a ByteTransformer model.

A diagram of a generic end-to-end sequence model using byte stream input. The ByteQRNN model uses a QRNN encoder from the SeqFlowLite library.

In addition to the input embeddings (i.e., the output from the embedding layer described above), we go a step further to build an effective sequence-to-sequence (seq2seq) model. We do so by taking ByteQRNN and adding a Transformer-based decoder model along with a quantized beam search (or tree exploration) to go with it. The quantized beam search module reduces the inference latency when generating decoder outputs by computing the most likely beams (i.e., possible output sequences) using the logarithmic sum of previous and current probabilities and returns the resulting top beams. Here the system uses a more efficient 8-bit integer (uint8) format, compared to a typical single-precision floating-point format (float32) model.
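The toy example below shows the core bookkeeping of a beam search in plain Python: scores are accumulated as sums of log-probabilities and only the top beams are kept. The released module additionally performs this arithmetic in quantized uint8 form and is driven by a real decoder rather than the fixed per-step distributions used here:

    import math

    def beam_search(step_log_probs, beam_width=2):
        """step_log_probs[t][token] is the log-probability of `token` at step t."""
        beams = [((), 0.0)]                          # (token sequence, cumulative log-prob)
        for log_probs in step_log_probs:
            candidates = [
                (seq + (tok,), score + lp)           # log of a product = sum of logs
                for seq, score in beams
                for tok, lp in enumerate(log_probs)
            ]
            beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        return beams

    # Toy 3-step decoding over a 3-token vocabulary.
    steps = [[math.log(p) for p in dist] for dist in
             [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.1, 0.8]]]
    print(beam_search(steps))                        # the top-2 sequences with their scores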

The decoder Transformer model uses a merged attention sublayer (MAtt) to reduce the complexity of the decoder self-attention from quadratic to linear, thereby lowering the end-to-end latency. For each decoding step, MAtt uses a fixed-size cache for decoder self-attention compared to the increasing cache size of a traditional transformer decoder. The following figure illustrates how the beam search module interacts with the decoder layer to generate output tokens on-device using an edge device (e.g., mobile phones, tablets, etc.).
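As a loose illustration of why a fixed-size cache bounds per-step cost, here is a hypothetical ring-buffer key/value cache in NumPy; this is an assumption made for the example, not the actual MAtt implementation:

    import numpy as np

    class FixedSizeCache:
        """Self-attention cache that never grows beyond max_steps entries."""
        def __init__(self, max_steps: int, dim: int):
            self.keys = np.zeros((max_steps, dim), np.float32)
            self.values = np.zeros((max_steps, dim), np.float32)
            self.step = 0
            self.max_steps = max_steps

        def append(self, k, v):
            slot = self.step % self.max_steps        # overwrite the oldest entry
            self.keys[slot], self.values[slot] = k, v
            self.step += 1

        def attend(self, query):
            n = min(self.step, self.max_steps)       # attention cost is bounded by max_steps
            logits = self.keys[:n] @ query
            weights = np.exp(logits - logits.max())
            weights /= weights.sum()
            return weights @ self.values[:n]

    rng = np.random.default_rng(0)
    cache = FixedSizeCache(max_steps=8, dim=16)
    for _ in range(20):                              # 20 decode steps; the cache stays at 8 entries
        cache.append(rng.normal(size=16).astype(np.float32),
                     rng.normal(size=16).astype(np.float32))
    print(cache.attend(rng.normal(size=16).astype(np.float32)).shape)  # (16,)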

A comparison of cloud server decoding and on-device (edge device) implementation. Left: Cloud server beam search employs a Transformer-based decoder model with quadratic time self-attention in float32, which has an increasing cache size for each decoding step. Right: The edge device implementation employs a quantized beam search module along with a fixed-size cache and a linear time self-attention computation.

Evaluation
After developing ByteQRNN, we evaluate its performance on the civil_comments dataset using the area under the curve (AUC) metric and compare it to a pre-trained ByteQRNN and BERT (shown below). We demonstrate that the fine-tuned ByteQRNN improves the overall quality and brings its performance closer to the BERT models, despite being 300x smaller. Since SeqFlowLite models support in-training quantization that reduces model sizes to roughly one-fourth, the resulting models scale well to low-compute devices. We chose multilingual data sources related to the task for pre-training both BERT and the byte stream models to achieve the best possible performance.
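For reference, the AUC metric used here can be computed with scikit-learn; the labels and scores below are made up purely to show the call, not taken from the actual evaluation:

    from sklearn.metrics import roc_auc_score

    labels = [0, 0, 1, 1, 0, 1]                    # e.g., 1 = toxic comment, 0 = non-toxic
    scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]       # model scores for the positive class
    print(roc_auc_score(labels, scores))           # area under the ROC curve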

Comparison of ByteQRNN with fine-tuned ByteQRNN and BERT on the civil_comments dataset.

Conclusion
Following up on our previous work with pQRNN, we evaluate byte stream models for on-device use to enable pre-training and thereby improve model performance for on-device deployment. We present an evaluation for ByteQRNN with and without pre-training and demonstrate that the performance of the pre-trained ByteQRNN is comparable to BERT, despite being 300x smaller. In addition to ByteQRNN, we are also releasing ByteTransformer and ByteFunnelTransformer, two models which use different encoders, along with the merged attention decoder model and the beam search driver to run the inference through the SeqFlowLite library. We hope these models will provide researchers and product developers with valuable resources for future on-device deployments.

Acknowledgements
We would like to thank Khoa Trinh, Jeongwoo Ko, Peter Young and Yicheng Fan for helping with open-sourcing and evaluating the model. Thanks to Prabhu Kaliamoorthi for all the brainstorming and ideation. Thanks to Vinh Tran, Jai Gupta and Yi Tay for their help with pre-training byte stream models. Thanks to Ruoxin Sang, Haoyu Zhang, Ce Zheng, Chuanhao Zhuge and Jieying Luo for helping with the TPU training. Many thanks to Erik Vee, Ravi Kumar and the Learn2Compress leadership for sponsoring the project and their support and encouragement. Finally, we would like to thank Tom Small for the animated figure used in this post.

Source: Google AI Blog


Chrome Dev for Desktop Update

The Dev channel has been updated to 105.0.5195.19 for Windows, Mac and Linux.

A partial list of changes is available in the Git log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Prudhvi Bommana
Google Chrome

Supporting Asian-owned businesses in your community

When I was 5, our family moved from New York City to the countryside outside of the city. My brother and I were the only kids of Asian descent in our elementary school. Our father was born in Yamaguchi, Japan to a Japanese mother and American father, and I always felt proud of that — but in this new environment, I instantly felt different.

These early experiences showed me just how important it is to show up for and with communities. Over the past two years, COVID-related small business closures and targeted acts of violence have reinforced the importance and impact of allyship — and have underscored how critical it is to support historically marginalized communities, including our Asian community. That’s why we’re announcing a new way to help Asian-owned businesses thrive.

Celebrating Asian-owned businesses

Starting today, US businesses can add the Asian-owned attribute to their Business Profile on Search and Maps. In the coming weeks, ad-supported publishers will be able to identify as Asian-owned in Display & Video 360’s Marketplace, too.

A screenshot of East West Shop on Google Maps, showing that the business identifies as Asian-owned, LGBTQ+ friendly, and women-owned.

Businesses can opt in to adopt the attribute on their Business Profile and can easily opt out at any time. Once the attribute appears on a Business Profile, users will be able to see it. This update builds on the Black-owned, Latino-owned, veteran-owned, women-owned and LGBTQ+ owned business attributes, and is another way people can support a diversity of businesses across Google’s products and platforms.

As we were building this feature, we worked with hundreds of Asian-owned businesses to ensure the attribute celebrates our diverse and unique cultures. During that process, I was particularly struck by what Dennys Han, owner of East West Shop, shared with us about the power of community: “If someone is trying to accomplish something, the entire local Korean community will band together to help it come together. The idea of the community and group as a whole uplifting each other is fundamental to what we do.”

Building up Asian-owned businesses’ digital skills

Over the past few years, Grow with Google has partnered with the US Pan Asian American Chamber of Commerce (USPAACC) to help Asian-owned small businesses grow. To date, we’ve helped more than 20,000 Asian-owned businesses expand their digital skills through workshops focusing on topics like e-commerce tools, design thinking for entrepreneurs and making decisions using analytics.

Today, we’re building on that partnership. Together, USPAACC and Grow with Google will help an additional 10,000 Asian-owned small businesses gain digital skills to help them grow. And as the internet continues to grow in importance for shopping, nearly one quarter of Asian-owned business owners said social media and other online channels were their most important avenue for building community and financial support.

It’s our hope the Asian-owned attribute brings people together and provides our communities with much-needed recognition: to help them be seen and thrive. We are excited to spotlight Asian-owned businesses and highlight part of what makes our community unique and important.

A collage of 6 Asian-owned businesses, 3 on the top and 3 on the bottom with the Asian-owned attribute icon in the middle, a circular design with a red and yellow intertwining flower at its core. The top row of 3 (from left to right) include: pottery cups and plates on a table with Tortoise General Store owner holding 2 small dishes in the background, Good Hause Marketing Agency Business owner working, holding a marketing design poster board, and 3 t-shirts (black, pink, and white) hanging in East / West Shop. The bottom row of 3 (from left to right) include: the owner of Bollypop in red traditional dress from India twirling, the storefront of Jitlada restaurant, and the owner of Peru Films facing towards the right, looking down, and crossing his arms.

Top left to right:

Tortoise General Store, Owned by Taku and Keiko Shinomoto

Good Hause, Owned by Brittany Tran

East / West Shop, Owned by Dennys Han

Bottom left to right:

Bollypop, Owned by Aakansha Maheshwari

Jitlada, Owned by Sugar Sungkamee

Peru Films, Owned by Tanmay Chowdhary

Source: Google LatLong


Googlers for climate: meet Lisa Arendt

Based in Zürich, Lisa is Product Partnerships Manager for Maps. She helps partners to integrate their charging station locations into Google Maps, which makes recharging as seamless as possible for e-drivers.

And by seamless, she means that charging should be as easy, safe and reliable as it is with petrol- or diesel-powered cars.

She grew up in a small village near Schwerin, where she still goes to unwind. "There were no buses there. Just one empty street and maybe 20 houses. It's the kind of place where you had to make do with a bicycle," she says.

She doesn't even own a car. "In Zürich, you just don't need one." But today, she owns three bicycles: "A mountain bike for taking a spin in the countryside, a fast racing bike and an old city bike that I won't miss if it gets stolen", she says, laughing.

Lisa is always looking for the best way to get around — not just in her free time, but also at work.

The first big step was to display charging stations on Google Maps, making it easier for drivers to find the nearest charging station. The next step is smart route planning, which Volvo, for example, has already integrated into its vehicles.

We want to make charging electric cars as easy and reliable as possible

Travel has become a recurring theme in Lisa's life. On her journeys around the world, she always enjoyed finding her own routes and choosing the best options. But she says there was always a bigger question on her mind: How can we improve mobility? Not just for individuals, but for everyone.

Four years ago, Lisa took inspiration from the climate strikes organized by Greta Thunberg, and realized it was time to act. "The next generation is clearly telling us what they want from us. And they want it now." This growing movement changed the way people look at electric vehicles.

At the same time, Google Maps created a new global division with a whole range of experts and introduced the first electric vehicle (EV) feature on their maps. In 2020, the first fully integrated solution was created in collaboration with Polestar and Volvo, which developed an electric car with Google Assistant, Maps and Play built into its system.

Several major car manufacturers are now collaborating with Google to offer all-in-one solutions like this.

We’re changing, so the planet can remain the same

More and more drivers are now benefiting from the work that Lisa and her team are doing. According to the latest Global Electric Vehicle Outlook report, in 2021 nearly 10% of global car sales were electric, four times the market share in 2019. This brought the total number of electric cars on the world’s roads to about 16.5 million, triple the number in 2018. Sales in Europe showed robust growth (up 65% to 2.3 million) after the 2020 boom. At the same time, more and more car-sharing providers and public transport companies are investing in e-mobility or planning to transition in the near future.

Discussions are already taking place to see how Google and Lisa's team can support them along the way. Lisa's number-one priority for the future is to expand the project globally. She and her team have already come a long way by creating a practical online atlas for electric vehicle charging stations. Yet there are countless other ways to make mobility more sustainable in the future.

Source: Google LatLong


Beta Channel Update for ChromeOS

Hello Folks,

The Beta channel is being updated to 104.0.5112.83 (Platform version: 14909.100.0) for most ChromeOS devices.

If you find new issues, please let us know in one of the following ways:

  1. File a bug
  2. Visit our Chrome OS communities:
    1. General: Chromebook Help Community
    2. Beta Specific: ChromeOS Beta Help Community
  3. Report an issue or send feedback on Chrome

Interested in switching channels? Find out how.

Google ChromeOS.

Chrome Dev for Android Update

Hi everyone! We've just released Chrome Dev 105 (105.0.5195.17) for Android. It's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Stable Channel Update for Desktop

The Chrome team is delighted to announce the promotion of Chrome 104 to the stable channel for Windows, Mac and Linux. Chrome 104 is also promoted to our new extended stable channel for Windows and Mac. This will roll out over the coming days/weeks.



Chrome 104.0.5112.79 (Mac/Linux) and 104.0.5112.79/80/81 (Windows) contain a number of fixes and improvements; a list of changes is available in the log. Watch out for upcoming Chrome and Chromium blog posts about new features and big efforts delivered in 104.

Security Fixes and Rewards

Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third-party library that other projects similarly depend on, but haven’t yet fixed.




This update includes 27 security fixes. Below, we highlight fixes that were contributed by external researchers. Please see the Chrome Security Page for more information.


[$15000][1325699] High CVE-2022-2603: Use after free in Omnibox. Reported by Anonymous on 2022-05-16

[$10000][1335316] High CVE-2022-2604: Use after free in Safe Browsing. Reported by Nan Wang(@eternalsakura13) and Guang Gong of 360 Alpha Lab on 2022-06-10

[$7000][1338470] High CVE-2022-2605: Out of bounds read in Dawn. Reported by Looben Yang on 2022-06-22

[$5000][1330489] High CVE-2022-2606: Use after free in Managed devices API. Reported by Nan Wang(@eternalsakura13) and Guang Gong of 360 Alpha Lab on 2022-05-31

[$3000][1286203] High CVE-2022-2607: Use after free in Tab Strip. Reported by @ginggilBesel on 2022-01-11

[$3000][1330775] High CVE-2022-2608: Use after free in Overview Mode. Reported by Khalil Zhani on 2022-06-01

[$TBD][1338560] High CVE-2022-2609: Use after free in Nearby Share. Reported by koocola(@alo_cook) and Guang Gong of 360 Vulnerability Research Institute on 2022-06-22

[$8000][1278255] Medium CVE-2022-2610: Insufficient policy enforcement in Background Fetch. Reported by Maurice Dauer on 2021-12-09

[$5000][1320538] Medium CVE-2022-2611: Inappropriate implementation in Fullscreen API. Reported by Irvan Kurniawan (sourc7) on 2022-04-28

[$5000][1321350] Medium CVE-2022-2612: Side-channel information leakage in Keyboard input. Reported by Erik Kraft ([email protected]), Martin Schwarzl ([email protected]) on 2022-04-30

[$5000][1325256] Medium CVE-2022-2613: Use after free in Input. Reported by Piotr Tworek (Vewd) on 2022-05-13

[$5000][1341907] Medium CVE-2022-2614: Use after free in Sign-In Flow. Reported by raven at KunLun lab on 2022-07-05

[$4000][1268580] Medium CVE-2022-2615: Insufficient policy enforcement in Cookies. Reported by Maurice Dauer on 2021-11-10

[$3000][1302159] Medium CVE-2022-2616: Inappropriate implementation in Extensions API. Reported by Alesandro Ortiz on 2022-03-02

[$2000][1292451] Medium CVE-2022-2617: Use after free in Extensions API. Reported by @ginggilBesel on 2022-01-31

[$2000][1308422] Medium CVE-2022-2618: Insufficient validation of untrusted input in Internals. Reported by asnine on 2022-03-21

[$2000][1332881] Medium CVE-2022-2619: Insufficient validation of untrusted input in Settings. Reported by Oliver Dunk on 2022-06-04

[$2000][1337304] Medium CVE-2022-2620: Use after free in WebUI. Reported by Nan Wang(@eternalsakura13) and Guang Gong of 360 Alpha Lab on 2022-06-17

[$1000][1323449] Medium CVE-2022-2621: Use after free in Extensions. Reported by Huyna at Viettel Cyber Security on 2022-05-07

[$1000][1332392] Medium CVE-2022-2622: Insufficient validation of untrusted input in Safe Browsing. Reported by Imre Rad (@ImreRad) and @j00sean on 2022-06-03

[$1000][1337798] Medium CVE-2022-2623: Use after free in Offline. Reported by raven at KunLun lab on 2022-06-20



[$TBD][1339745] Medium CVE-2022-2624: Heap buffer overflow in PDF. Reported by YU-CHANG CHEN and CHIH-YEN CHANG, working with DEVCORE Internship Program on 2022-06-27



We would also like to thank all security researchers that worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.

As usual, our ongoing internal security work was responsible for a wide range of fixes:

  • [1251653] Various fixes from internal audits, fuzzing and other initiatives


Many of our security bugs are detected using AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, or AFL.


Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.




Srinivas Sista
Google Chrome

Chrome Stable for iOS Update

Hi everyone! We've just released Chrome Stable 104 (104.0.5112.71) for iOS; it'll become available on the App Store in the next few hours.

This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Harry Souders
Google Chrome