Stay productive with these Google features on iOS

If you use Google apps to get work done on your iPhone or iPad, we’re making some improvements to help you stay organized and productive.

Keep on top of your inbox with the new Gmail widget

Thanks to your helpful feedback on our first Gmail widget, we’re adding a new one so you can better manage your inbox on iOS. With the new widget, you’ll see the senders and subjects of your most recent emails right on your Home Screen.

Gif of the Gmail widget, fading from a gray to a black background. It shows an icon with a picture of a dog next to three emails. Each email shows the sender and subject line.

The new Gmail widget will put more of your inbox on your Home Screen

Multitask with Google Meet

ICYMI, we recently made multitasking easier on Google Meet. With Picture-in-Picture support, you can still participate in your meeting as you move between apps on your iOS device.

For example, you might want to forward an email, share a document or just look something up while you’re chatting. Simply navigate out of the Google Meet app, and your meeting will be minimized in a window that you can move around your Home Screen. You can also resize the meeting window, or slide it off to the side if you need more space to get something else done.

We’re launching this same feature on the Gmail app in the next few weeks. Stay tuned.

A gif showing different examples of the Picture-in-Picture screen floating above other screens on the iPhone. It transitions from Gmail, to Photos, to a Google Doc while the Google Meeting screen is in the foreground.

Picture-in-picture supports multitasking with Google Meet

Do more with Google Sheets

If you work with spreadsheets, keyboard shortcuts can be really useful. So we’re adding shortcut support to Google Sheets on iOS.

Shortcuts make it easier to complete common and advanced tasks on Google Sheets using a small keyboard — like selecting a whole row or finding and replacing certain values. Shortcuts will also work if you’re using a Bluetooth or Magic Keyboard on your iPad. Just hold down the command key to see the available shortcuts.

A gif cycling through Google Sheets screens, showing the pop-up keyboard shortcut menu on an iPad.

Get more done in Google Sheets for iOS, with new keyboard shortcuts

We hope you enjoy these new features launching in the next few weeks, and that they help make it easier to get your work done on iOS devices.

Expanding Google Summer of Code in 2022

We are pleased to announce that in 2022 we’re broadening the scope of Google Summer of Code (GSoC) with exciting new updates to the program.

For 17 years, GSoC has focused on bringing new open source contributors into OSS communities big and small. GSoC has brought together over 18,000 university students from 112 countries and over 17,000 mentors from 746 open source organizations.

At its heart, GSoC is a mentorship program where people interested in learning more about open source are welcomed into our open source communities by excited mentors ready to help them learn and grow as developers. The goal is to have these new contributors stay involved in open source communities long after their Google Summer of Code program is over.

Over the course of GSoC’s 17 years, open source has grown and evolved, and we’ve realized that the program needs to evolve as well. With that in mind, we have several major updates to the program coming in 2022, aimed at better meeting the needs of our open source communities and providing more flexibility to both projects and contributors so that people from all walks of life can find, join and contribute to great open source communities.

Expanding eligibility

Beginning in 2022, we are opening the program up to all newcomers to open source who are 18 years old and older. The program will no longer be solely focused on university students or recent graduates. We realize there are many folks at various stages of their careers who could benefit from GSoC: recent career changers, self-taught developers, and people returning to the workforce, among others. So we want to give these folks the opportunity to participate.

We expect many students to continue applying to the program (which we encourage!), yet we wanted to provide excited individuals who want to get into open source—but weren’t sure how to get started or whether open source communities would welcome their newbie contributions—with a place to start.

Many people can benefit from mentorship programs like GSoC and we want to welcome more folks into open source.

Multiple Sizes of Projects

This year we introduced the concept of a medium-sized project in response to the many distractions folks were dealing with during the pandemic. This adjustment was beneficial for many participants and organizations, but we also heard feedback that the larger, more complex projects were a better fit for others. In the spirit of flexibility, we are going to support both medium-sized projects (~175 hours) and large projects (~350 hours) in 2022.

One of our goals is to find ways to get more people from different backgrounds into open source, which means meeting people where they are and understanding that not everyone can devote an entire summer to coding.

Increased Flexibility of Timing for Projects

For 2022, we are allowing for considerable flexibility in the timing for the program. You can spread the project out over a longer period of time and you can even switch to a longer timeframe mid-program if life happens. Rather than a mandatory 12-week program that runs from June – August with everyone required to finish their projects by the end of the 12th week, we are opening it up so mentors and their GSoC Contributors can decide together if they want to extend the deadline for the project up to 22 weeks.
Image with text reads 'Google Summer of Code'

Interested in Applying to GSoC?

We will announce the GSoC 2022 program timeline soon.

Open Source Organizations

Does your open source project want to learn more about how to apply to be a mentoring organization? This is a mentorship program focused on welcoming new contributors into your community and helping them learn best practices that will help them be long term OSS contributors. A key factor is having plenty of mentors excited about teaching newcomers about open source.

Read the mentor guide to learn more about what it means to be a mentoring organization, how to prepare your community, how to create appropriate project ideas (175-hour and 350-hour projects), and tips for preparing your application.

Want to be a GSoC Contributor?

Are you a potential GSoC Contributor interested in learning how to prepare for the 2022 GSoC program? It’s never too early to start thinking about your proposal or about what type of open source organization you may want to work with. Read through the student/contributor guide for important tips on preparing your proposal and what to consider if you wish to apply for the program in 2022. You can also get inspired by checking out the 199 organizations that participated in Google Summer of Code 2021, as well as the projects that students worked on.

We encourage you to explore other resources and you can learn more on the program website.

Please spread the word to your friends as we hope these updates to the program will help more excited folks apply to be GSoC Contributors and mentoring organizations in GSoC 2022!


By Stephanie Taylor, Program Manager, Google Open Source

Model Ensembles Are Faster Than You Think

When building a deep model for a new machine learning application, researchers often begin with existing network architectures, such as ResNets or EfficientNets. If the initial model’s accuracy isn’t high enough, a larger model may be a tempting alternative, but it may not actually be the best solution for the task at hand. Instead, better performance could potentially be achieved by designing a new model that is optimized for the task. However, such efforts can be challenging and usually require considerable resources.

In “Wisdom of Committees: An Overlooked Approach to Faster and More Accurate Models”, we discuss model ensembles and a subset called model cascades, both of which are simple approaches that construct new models by collecting existing models and combining their outputs. We demonstrate that ensembles of even a small number of models that are easily constructed can match or exceed the accuracy of state-of-the-art models while being considerably more efficient.

What Are Model Ensembles and Cascades?
Ensembles and cascades are related approaches that leverage the advantages of multiple models to achieve a better solution. Ensembles execute multiple models in parallel and then combine their outputs to make the final prediction. Cascades are a subset of ensembles, but execute the collected models sequentially and exit early once a prediction is confident enough. For simple inputs, cascades use less computation, but for more complex inputs, they may end up calling on a greater number of models, resulting in higher computation costs.

Overview of ensembles and cascades. While this example shows 2-model combinations for both ensembles and cascades, any number of models can potentially be used.

Compared to a single model, ensembles can provide improved accuracy if there is variety in the collected models’ predictions. For example, the majority of images in ImageNet are easy for contemporary image recognition models to classify, but there are many images for which predictions vary between models and that will benefit most from an ensemble.

While ensembles are well-known, they are often not considered a core building block of deep model architectures and are rarely explored when researchers are developing more efficient models (with a few notable exceptions [1, 2, 3]). Therefore, we conduct a comprehensive analysis of ensemble efficiency and show that a simple ensemble or cascade of off-the-shelf pre-trained models can enhance both the efficiency and accuracy of state-of-the-art models.

To encourage the adoption of model ensembles, we demonstrate the following beneficial properties:

  1. Simple to build: Ensembles do not require complicated techniques (e.g., early exit policy learning).
  2. Easy to maintain: Ensembles are trained independently, making them easy to maintain and deploy.
  3. Affordable to train: The total training cost of models in an ensemble is often lower than a similarly accurate single model.
  4. On-device speedup: The reduction in computation cost (FLOPS) successfully translates to a speedup on real hardware.

Efficiency and Training Speed
It’s not surprising that ensembles can increase accuracy, but using multiple models in an ensemble may introduce extra computational cost at runtime. So, we investigate whether an ensemble can be more accurate than a single model that has the same computational cost. We analyze a series of models, EfficientNet-B0 to EfficientNet-B7, that have different levels of accuracy and FLOPS when applied to ImageNet inputs. The ensemble predictions are computed by averaging the predictions of each individual model.
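The averaging step is simple in practice. A minimal sketch, assuming each model maps an input to a vector of class probabilities (the model functions below are stand-ins, not the EfficientNets used in the paper):

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the class-probability vectors of every model, then pick the argmax."""
    avg_probs = np.mean([model(x) for model in models], axis=0)
    return int(np.argmax(avg_probs))
```

Because each member model runs independently, the forward passes can also be executed in parallel, so the ensemble's latency can approach that of its slowest member rather than the sum of all members.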

We find that ensembles are significantly more cost-effective in the large computation regime (>5B FLOPS). For example, an ensemble of two EfficientNet-B5 models matches the accuracy of a single EfficientNet-B7 model, but does so using ~50% fewer FLOPS. This demonstrates that instead of using a large model, in this situation, one should use an ensemble of multiple considerably smaller models, which will reduce computation requirements while maintaining accuracy. Moreover, we find that the training cost of an ensemble can be much lower (e.g., two B5 models: 96 TPU days total; one B7 model: 160 TPU days). In practice, model ensemble training can be parallelized using multiple accelerators leading to further reductions. This pattern holds for the ResNet and MobileNet families as well.

Ensembles outperform single models in the large computation regime (>5B FLOPS).

Power and Simplicity of Cascades
While we have demonstrated the utility of model ensembles, applying an ensemble is often wasteful for easy inputs where a subset of the ensemble will give the correct answer. In these situations, cascades save computation by allowing for an early exit, potentially stopping and outputting an answer before all models are used. The challenge is to determine when to exit from the cascade.

To highlight the practical benefit of cascades, we intentionally choose a simple heuristic to measure the confidence of the prediction — we take the confidence of the model to be the maximum of the probabilities assigned to each class. For example, if the predicted probabilities for an image being either a cat, dog, or horse were 10%, 80% and 10%, respectively, then the confidence of the model's prediction (dog) would be 0.8. We use a threshold on the confidence score to determine when to exit from the cascade.
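Under this heuristic, the whole early-exit policy fits in a few lines. A minimal sketch, where each model is assumed to return a vector of class probabilities and the threshold value is illustrative:

```python
import numpy as np

def cascade_predict(models, x, threshold=0.9):
    """Run models in order; exit as soon as the max class probability clears the threshold."""
    probs = None
    for model in models:
        probs = model(x)
        confidence = probs.max()   # e.g. 0.8 for [0.1, 0.8, 0.1]
        if confidence >= threshold:
            break                  # early exit: skip the remaining (larger) models
    return int(np.argmax(probs))   # fall back to the last model's answer if never confident
```

Easy inputs exit after the first small model; only hard inputs pay for the later, more expensive ones, which is why the average FLOPS over a test set can drop substantially.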

To test this approach, we build model cascades for the EfficientNet, ResNet, and MobileNetV2 families to match either computation costs or accuracy (limiting the cascade to a maximum of four models). By design in cascades, some inputs incur more FLOPS than others, because more challenging inputs go through more models in the cascade than easier inputs. So we report the average FLOPS computed over all test images. We show that cascades outperform single models in all computation regimes (when FLOPS range from 0.15B to 37B) and can enhance accuracy or reduce the FLOPS (sometimes both) for all models tested.

Cascades of EfficientNet (left), ResNet (middle) and MobileNetV2 (right) models on ImageNet. When using similar FLOPS, cascades obtain a higher accuracy than single models (shown by the red arrows pointing up). Cascades can also match the accuracy of single models with significantly fewer FLOPS e.g., 5.4x for B7 (green arrows pointing left).
Summary of accuracy vs. FLOPS for ensembles and cascades. Squares and stars represent ensembles and cascades, respectively, and the “+” notation indicates the models that comprise the ensemble or cascade. For example, “B3+B4+B5+B7” at a star refers to a cascade of EfficientNet-B3, B4, B5 and B7 models.

In some cases it is not the average computation cost but the worst-case cost that is the limiting factor. By adding a simple constraint to the cascade building procedure, one can guarantee an upper bound to the computation cost of the cascade. See the paper for more details.

Other than convolutional neural networks, we also consider a Transformer-based architecture, ViT. We build a cascade of ViT-Base and ViT-Large models to match the average computation or accuracy of a single state-of-the-art ViT-Large model, and show that the benefit of cascades also generalizes to Transformer-based architectures.

              Single Models          Cascades - Similar Throughput     Cascades - Similar Accuracy
           Top-1 (%)  Throughput    Top-1 (%)  Throughput  ΔTop-1      Top-1 (%)  Throughput  Speedup
ViT-L-224    82.0        192          83.1        221        1.1         82.3        409        2.1x
ViT-L-384    85.0         54          86.0         69        1.0         85.2        125        2.3x
Cascades of ViT models on ImageNet. “224” and “384” indicate the image resolution on which the model is trained. Throughput is measured as the number of images processed per second. Our cascades can achieve a 1.0% higher accuracy than ViT-L-384 with a similar throughput or achieve a 2.3x speedup over that model while matching its accuracy.

Earlier works on cascades have also shown efficiency improvements for state-of-the-art models, but here we demonstrate that a simple approach with a handful of models is sufficient.

Inference Latency
In the above analysis, we average FLOPS to measure the computational cost. It is also important to verify that the FLOPS reduction obtained by cascades actually translates into speedup on hardware. We examine this by comparing on-device latency and speed-up for similarly performing single models versus cascades. We find a reduction in the average online latency on TPUv3 of up to 5.5x for cascades of models from the EfficientNet family compared to single models with comparable accuracy. The larger the models, the greater the speedup we find with comparable cascades.

Average latency of cascades on TPUv3 for online processing. Each pair of same colored bars has comparable accuracy. Notice that cascades provide drastic latency reduction.

Building Cascades from Large Pools of Models
Above, we limit the model types and only consider ensembles/cascades of at most four models. While this highlights the simplicity of using ensembles, it also lets us check all combinations of models in very little time, so we can find optimal model collections with only a few CPU hours on a held-out set of predictions.
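That exhaustive search can be pictured as follows. This is a sketch, assuming each candidate model's held-out class probabilities and per-example FLOPS cost have been cached; the function and variable names are illustrative, not the paper's actual tooling:

```python
from itertools import combinations

import numpy as np

def search_ensembles(pred_probs, labels, flops, max_size=4):
    """Score every ensemble of up to max_size models on cached held-out
    predictions. Returns (accuracy, total_flops, member_names) tuples,
    best accuracy first, cheapest first among ties."""
    results = []
    for k in range(1, max_size + 1):
        for combo in combinations(pred_probs, k):
            # Ensemble prediction = average of the members' cached probabilities.
            avg = np.mean([pred_probs[name] for name in combo], axis=0)
            acc = float((avg.argmax(axis=1) == labels).mean())
            cost = sum(flops[name] for name in combo)
            results.append((acc, cost, combo))
    return sorted(results, key=lambda r: (-r[0], r[1]))
```

Because the per-model predictions are computed once and reused, scoring a combination is just an average and an argmax, which is why checking all small combinations is cheap even for dozens of candidate models.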

When a large pool of models exists, we would expect cascades to be even more efficient and accurate, but brute force search is not feasible. However, efficient cascade search methods have been proposed. For example, the algorithm of Streeter (2018), when applied to a large pool of models, produced cascades that matched the accuracy of state-of-the-art neural architecture search–based ImageNet models with significantly fewer FLOPS, for a range of model sizes.

Conclusion
As we have seen, ensemble/cascade-based models obtain superior efficiency and accuracy over state-of-the-art models from several standard architecture families. In our paper we show more results for other models and tasks. For practitioners, this outlines a simple procedure to boost accuracy while retaining efficiency using off-the-shelf models. We encourage you to try it out!

Acknowledgement
This blog presents research done by Xiaofang Wang (while interning at Google Research), Dan Kondratyuk, Eric Christiansen, Kris M. Kitani, Yair Alon (prev. Movshovitz-Attias), and Elad Eban. We thank Sergey Ioffe, Shankar Krishnan, Max Moroz, Josh Dillon, Alex Alemi, Jascha Sohl-Dickstein‎, Rif A Saurous, and Andrew Helton for their valuable help and feedback.

Source: Google AI Blog


Taking the leap to pursue a passion in Machine Learning with Leigh Johnson #IamaGDE

Welcome to #IamaGDE - a series of spotlights presenting Google Developer Experts (GDEs) from across the globe. Discover their stories, passions, and highlights of their community work.

Leigh Johnson turned her childhood love of Geocities and Neopets into a web development career, and then trained her focus on Machine Learning. Now, she’s a staff software engineer at Slack, a Google Developer Expert in Web and Machine Learning, and founder of Print Nanny, an automated failure-detection and monitoring system for 3D printers.

Meet Leigh Johnson, Google Developer Expert in Web and Machine Learning.

Image shows GDE Leigh Johnson, smiling at the camera and holding a circuit board of some kind

GDE Leigh Johnson

The early days

Leigh Johnson grew up in the Bronx, NY, and got an early start in web development when she became captivated by Geocities and Neopets in elementary school.

“I loved the power of being able to put something online that other people could see, using just HTML and CSS,” she says.

She started college and studied Latin, but it wasn’t the right fit for her, so she dropped out and launched her own business building WordPress sites for small businesses, like local restaurants putting their menus online for the first time or taking orders through a form.

“I was 18, running around a data center trying to rack servers and teaching myself DNS to serve my customer base, which was small business owners,” she says. “I ran my business for five years, until companies like Squarespace and Wix started to edge me out of the market a little bit.”

Leigh went on to chase her dream of working in the video game industry, where she got exposed to low-level C++ programming, graphics engines, and basic statistics, which led her to machine learning.

Image shows GDE Leigh Johnson, smiling at the camera and standing in front of a presentation screen at SFPython

Machine learning

At the video game studio where she worked, Leigh got into Bayesian inference.

“It’s old school machine learning, where you try to predict things based on the probability of previous events,” she explains. “You look at past events and try to predict the probability of future events, and I did this for marketing offers—what’s the likelihood you’d purchase a yellow hat to match your yellow pants?”

In the first month or two of trying email offers, the company made more small dollar sales than they typically made in a year.

“I realized, this is powerful dark magic; I must learn more,” Leigh says.

She continued working for tech startups like Ansible, which was acquired by Red Hat, and Dave.com, doing heavy data lifting.

“Everything about machine learning is powered by being able to manipulate and get data from point A to point B,” she says.

Today, Leigh works on machine learning and infrastructure at Slack and is a Google Developer Expert in machine learning. She also has a side project she runs: Print Nanny.

Image shows circuit board with fan next to image of its schematics

Print Nanny: Monitoring 3D printers

When Leigh got into 3D printing as a hobby during the COVID-19 shutdown, she discovered that 3D printers can be unreliable and lack sophisticated monitoring programs.

“When I assembled my 3D printer myself, I realized that over time, the calibration is going to change,” she says. “It's a very finicky process, and it didn't necessarily guarantee the quality of these traditional large batch manufacturing processes.”

She installed a nanny cam to watch her 3D printer and researched solutions, knowing from her machine learning experience that because 3D printers build a print up layer by layer, there’s no one point of failure—failure happens layer by layer, over time. So she wrote that algorithm.

“I saw an opportunity to take some of the traditional machine intelligence strategies used by large manufacturers to ensure there’s a certain consistency and quality to the things they produce, and I made Print Nanny,” she says. “It uses a Raspberry Pi, a credit card-sized computer that costs $30. You can stick a computer vision model on one and do offline inference, which are basically predictions about what the camera sees. You can make predictions about whether a print will fail, help score calculations, and attenuate the print.”
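Print Nanny's actual pipeline isn't shown here, but the layer-by-layer idea Leigh describes can be illustrated with a toy sketch: smooth a per-frame defect score over time, so that sustained degradation across layers, rather than a single noisy frame, raises the alert. The scores, smoothing factor, and threshold below are made up for illustration:

```python
def print_health(defect_scores, alpha=0.3, alert_threshold=0.6):
    """Track an exponentially weighted defect score across camera frames.

    A single noisy frame barely moves the running score, but defects that
    persist layer after layer push it over the alert threshold.
    Returns the frame index at which the alert fires, or None if healthy.
    """
    smoothed = 0.0
    for frame, score in enumerate(defect_scores):
        smoothed = alpha * score + (1 - alpha) * smoothed
        if smoothed >= alert_threshold:
            return frame   # alert: sustained failure detected here
    return None            # print looks healthy
```

In a real deployment the per-frame score would come from the on-device computer vision model; the smoothing is what encodes the insight that 3D-print failures accumulate over time rather than occurring at one point.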

Leigh used Google Cloud Platform AutoML Vision, Google Cloud Platform IoT Core, TensorFlow Model Garden, and TensorFlow.js to build Print Nanny. Using GCP credits provided by Google, she improved and developed Print Nanny with TensorFlow and Google Cloud Platform products.

When Print Nanny detects that a print is failing, the user receives a notification and can remotely pause or stop the printer.

“Print Nanny is an automated failure detection system and monitoring system for 3D printers, which uses computer vision to detect defects and alert you to potential quality or safety hazards,” Leigh says.

Leigh has hired team members who are interested in machine learning to help her with the technical aspects of Print Nanny. Print Nanny currently has 2100 users signed up for a closed beta, with 200 people actively using the beta version. Of that group, 80% are hobbyists and 20% are small business owners. Print Nanny is 100% open source.

Image shows a collection of 3D-Printed parts

Becoming a GDE

Leigh got involved with the GDE program about four years ago, when she began putting machine learning models on Raspberry Pis and building robots. She began writing tutorials about what she was learning.

“The things I was doing were quite hard: TensorFlow Lite, the mobile version of TensorFlow—there was a missing documentation opportunity there, and my target platform, the Raspberry Pi, is a hobbyist platform, so there was a little bit of missing documentation there,” Leigh says. “For a hobbyist who wanted to pick up a Raspberry Pi and do a computer vision project for the first time, there was a missing tutorial there, so I started writing about what I was doing, and the response was tremendous.”

Leigh’s work caught the eye of Google staff research engineer Pete Warden, the technical lead of the TensorFlow Mobile team, who encouraged her, and she leveraged the GDE program to connect with Google experts on TensorFlow and machine learning. Google provides a machine learning course for developers and supports TensorFlow, in addition to its many AI products.

“I had no knowledge of graph programming or what it meant to adapt the low-level kernel operations that would run on a Raspberry Pi, or compiling software, and I learned all that through the GDE program,” Leigh says. “This program changed my life.”

Image shows 1 man and three women smiling at the camera. Leigh is taking the photo selfie-style

Leigh’s favorite part of the GDE program is going to events like TensorFlow World, which she last attended in 2019, and GDE summits. She hadn’t travelled internationally until she was in her 20s, so the GDE program has connected her to the international community.

“It’s been life-changing,” she says. “I never would have had access to that many perspectives. It’s changed the way I view the world, my life, and myself. It’s very powerful.”

Leigh smiles at the camera in front of a sign that reads TensorFlow for mobile and edge devices

Leigh’s advice to future developers

Leigh recommends that people find the best environment for themselves and adopt a growth mindset.

“The best advice that I can give is to find your motivation and find the environment where you can be successful,” she says. “Surround yourself with people who are lifelong learners. When you cultivate an environment of learning around you, it's this positive, self-perpetuating process.”

#AndroidDevSummit: Jetpack Compose now with Material You

Posted by Nick Butcher, Developer Relations Engineer


The Android Dev Summit last month brought a number of exciting updates to Jetpack Compose, including that Material You, Google's new design language, is now available in Compose. In case you missed it, here's a recap of all the announcements.


New Releases: Jetpack Compose 1.1 beta and compose-material3

We released Jetpack Compose 1.1 beta. This means that new APIs in 1.1 are now stable, offering new functionality and performance improvements. 1.1 includes new features like improved focus handling, touch target sizing, `ImageVector` caching, and support for Android 12 stretch overscroll. Compose 1.1 also graduates a number of previously experimental APIs to stable and supports newer versions of Kotlin. We've already updated our samples, codelabs and Accompanist library to work with Compose 1.1.

We released compose-material3. This is a brand new artifact for building Material You UIs with Jetpack Compose. It offers updated components and a new color system, including support for dynamic color, which creates a personalized color palette from a user's wallpaper. This is our first alpha, so we welcome your feedback as we continue to add features and iterate on the APIs. Check out the new m3.material.io website to learn more about Material Design 3 and find tools to help you design and build with dynamic color, like the Material Theme Builder.



More Guidance & Documentation for Jetpack Compose

We released a ton of talks about Jetpack Compose, providing deep dives into layout, animation and state; showed how to use Compose across Wear OS, homescreen widgets and large screens; and held three code-alongs: live coding your first Compose app, migrating an existing app, and using Compose on Wear OS. Finally, we held a panel discussion, answering your burning questions about Jetpack Compose and Material.

We also expanded the Compose documentation, including new guides on the Phases of Jetpack Compose, Building Adaptive Layouts and expanded theming guidance including guidance for Material 3.


Tooling updates in Android Studio Bumblebee

At ADS, Android Studio Bumblebee entered Beta, bringing richer support for Jetpack Compose.

Android Studio Chipmunk canaries also introduced a new template for Compose (and View based) Material 3 applications.


Handoff

Lastly, we gave a sneak peek of some new tooling for design handoff, enabling you to export components designed in Figma to generate idiomatic Jetpack Compose code. You can iterate on the designs, pull in new changes, and safely edit the generated code. We're looking for a small group of teams to work directly with, so go sign up.

Jetpack Compose is stable and ready for production. We’ve been thrilled to see tens of thousands of apps start using Jetpack Compose in production and we continue to build our roadmap of features to enable you to use Compose to create excellent apps, across devices.

Building a more equitable workplace

When we established our racial equity commitments in June 2020, we started with a concerted focus on building equity with and for the Black community as part of our ongoing work to build a Google where everyone belongs. Over the past year, we’ve provided regular updates on our progress.

Through this work, we've found new ways to support all groups who have historically been underrepresented in the tech industry, and to improve our products so they work better for everyone. Here’s a look at our latest efforts.

Building a more representative workforce

We set out to improve leadership representation of Black+, Latinx+ and Native American+ Googlers in the U.S. by 30% by 2025. We’ve already reached our goal, and we’re on track to double the number of Black+ Googlers at all other levels in the U.S. by 2025.

Hiring alone isn’t enough. We’re continually investing in onboarding, progressing and retaining our underrepresented employees. This year, we ran an onboarding pilot to provide a sense of community, and targeted support and mentorship for Black new hires in the U.S., including providing an onboarding roadmap, resources and virtual seminars. New employees at the Director level were also paired with buddies in the Black Leadership Advisory Group (BLAG). We’ve seen positive feedback from this program — in fact, 80% of respondents to questions about their pilot experience said they would recommend it. We'll take what we’ve learned and roll out a six-month onboarding program for Black new hires globally early next year.

We’re building a similar program for Latino Googlers, and many of our Employee Resource Groups have worked with us to establish a Noogler Buddy program. And in Europe, the Middle East and Africa, Black employees can opt in to receive one-on-one mentorship and external executive coaching during the second half of this year — regardless of tenure.

We continue to invest in fair and consistent performance reviews, promotion and pay outcomes. And we know leadership engagement is critical in this area, so all VPs are now evaluated on their leadership in support of diversity, equity and inclusion, which factors into their ratings and pay.

Ensuring our products work for everyone

We’re also continuing to build products that work for all users. Last month, we launched the Pixel 6 with an improved camera, plus face detection and editing products, which we call Real Tone — specifically to power images with more brightness, depth and detail across skin tones. And we’re continuing our work to take down videos with misinformation, removing roughly 10 million a quarter.

The call for product inclusion and equity ideas to support the Black community resulted in 80 new projects since 2020, including making a Black-owned business attribute available to merchants in the U.S. We also worked closely with the United States Hispanic Chamber of Commerce (USHCC) to unveil a new Latino-owned attribute in Google Business Profiles to help Latino-owned businesses get discovered in Google Search and Maps. We’re also creating Grow with Google digital resource centers with USHCC that will train an additional 10,000 Latino business owners on how to use digital tools to grow their business.

Creating pathways to tech

Back in June, we granted Historically Black Colleges and Universities (HBCUs) $50 million in unrestricted funding so these institutions could invest in their communities and the future workforce as they see fit. For example, North Carolina A&T State University is putting $150,000 towards curriculum development in pre-college programs for aspiring science, technology, engineering and mathematics (STEM) students. Morgan State University has dedicated $1 million to computer science operations, which includes new ideation lab spaces and equipment enhancements. Additionally, as part of our $15 million investment in the Latino community, we’re providing a $1 million grant to Hispanic Federation to help Latino-led and Latino-serving nonprofits train more than 6,000 individuals in career-aligned digital skills over the next year.

We’ve also partnered with the Thurgood Marshall College Fund, the Hispanic Association of Colleges and Universities and Partnership with Native Americans to bring digital skills and workforce training to HBCUs, Hispanic Serving Institutions (HSIs) and Native Serving Organizations (NSOs) through the Grow with Google Career Readiness program. In total, Google has committed to training more than 250,000 Black, Latino and Indigenous students by 2025. And through Grow with Google: Black Women Lead, we’re providing 100,000 Black women with career development and digital skills training by spring 2022.

We're also expanding the paths to technology outside the U.S. For example, in Brazil, we launched the second class of Next Step, an internship program exclusively for Black students that removes the prerequisite for English.

Providing opportunities for economic advancement

Last year, we announced a goal to spend $100 million with Black-owned suppliers, as part of our broader supplier diversity commitment to spend more than $1 billion with diverse-owned suppliers in the U.S. every year. To date, we’ve paid out nearly $1.1 billion to diverse-owned suppliers, exceeding our $1 billion goal for 2021. We are also on track to meet our $100 million commitment toward Black-owned suppliers for 2021.

We continue to offer resources for Black-owned businesses through programs like the Google Storefront Kits program, which provides small businesses with free Google Nest and Pixel devices, alongside free installation and Grow with Google online training. In the first 60 days of the program, we donated 3,000 Nest and Pixel devices to more than 550 Black-owned businesses across the U.S. We’ve updated the kits based on business owners’ feedback and aim to reach an additional 1,200 Black-owned businesses across more cities in the U.S.

Google's commitment of $185 million has enabled Opportunity Finance Network (OFN) to establish the Grow with Google Small Business Fund and OFN's Grant Program, funded by Google.org, to assist Community Development Financial Institutions (CDFIs) working with underserved small businesses. To date, over $149 million in loans and grants has been disbursed to OFN member CDFIs, including $50 million to support Black-owned businesses.

We’re focusing on communities outside the United States, too. For example, in addition to the $15 million we invested in Black and Latino founders in the U.S., we’ve invested in 50 Black-owned startups in Africa, 29 Black-owned startups in Brazil and 30 Black-owned startups in Europe.

We’re also partnering with financial institutions like BlackRock, Goldman Sachs and JP Morgan to launch money market funds that promote racial equity. We’ve invested more than $1 billion in products that generate revenue for diverse-led financial institutions, like Loop Capital, and support programs like the One Million Black Women Initiative and the Thurgood Marshall College Fund.

Our racial equity work is an important part of our company-wide commitment to diversity, equity and inclusion. It takes thoughtful engagement with our underrepresented employees, including the Asian and Pacific Islander, Black, Latino and Native American communities — as well as people with disabilities, those who identify as LGBTQ+ and those who come from different religious backgrounds. Through this work, we’ll build a Google where everyone belongs and more helpful products for our users and the world.

Copy a single page or subset of pages in new Google Sites

Quick launch summary

In new Google Sites, we’re adding the ability for editors to copy a single page or subset of pages into a new site. Previously, it was only possible to make a copy of an entire site. This feature gives site editors more control, allowing users to reuse part of a site or easily break up a large site into smaller sites.

We hope this feature, along with other recent site editing capabilities such as restoring a specific page from a site, makes it easier for site editors to collaborate on large sites.

Getting started

To copy specific pages in Google Sites, select Make a copy > Pages > Selected Pages from the three-dot overflow menu.


Shop what’s trending this holiday season

It’s that festive time of year when we start planning holiday gatherings, swap out our iced lattes for hot cocoa, and once again, try to find the perfect gifts for our loved ones.

To help you zip through your holiday shopping this year, we’re sharing the Google Shopping Holiday 100 — a list of what we predict will be the 100 most popular categories and products in the U.S. for the holiday season, according to Google searches.

This year’s Holiday 100 reflects the realities many of us are still living in. Home equipment like coffee makers and fitness gear continue to make the list. But we’re also seeing more items that suggest people are getting out more, like fragrances and beauty products.

Spanning ages and interests, the 2021 trending categories include tech, gaming, kitchen gear, sports and fitness, health and beauty, fragrances, and toys and games. Let’s explore a few of them.

The year of the fragrance

Looking for a new fragrance to add to your collection? You’re not alone. This year, perfumes and colognes are trending higher than in years past. It could be that we’re all after a new signature scent, or maybe we’re sending hints to those who have been in their (less than fresh) loungewear for too long. Ranking highest on the list are Christian Dior Sauvage Eau De Toilette Spray, Maison Francis Kurkdjian Baccarat Rouge 540, and Versace Men's Eros Eau de Toilette Travel Spray.

Brewing at home

While coffee makers are no stranger to the kitchen gear category, we saw an uptick in the number and types of at-home coffee makers on this year’s list. So no matter how your loved ones take their cup of joe, you’re sure to find something for them. The top coffee maker is the Breville Barista Express Espresso Machine, followed by the Keurig K-Mini Single Serve Coffee Maker.

Gaming goes big

Gaming has become an increasingly popular category over the years, and 2021 is no exception. At the top of the gaming consoles list is the Nintendo Switch OLED, and top trending games include NBA 2K22, FIFA 22 and Metroid Dread. And you won’t just find the latest and greatest gaming products. This year, we saw ’80s and ’90s nostalgia play out with high rankings for the Nintendo 64 (ranked #6 in gaming consoles) and the Game Boy (ranked #8 in gaming consoles).

This holiday season, Google Shopping is helping you find deals, track the price of a specific item, check a product’s availability and more. Whether you’re looking for your family, your best friend, or even yourself, we hope you find the perfect gift with Google.

This Googler is dedicated to making a difference

Welcome to the latest edition of “My Path to Google,” where we talk to Googlers, interns and alumni about how they got to Google, what their roles are like and even some tips on how to prepare for interviews.

Today’s story is all about Lerato Seopela from our Johannesburg office. Lerato shares her path from management consultancy to marketing at Google, plus her passion for sustainability and beekeeping at home.

What do you do at Google?

I’m an Associate Product Marketing Manager (APMM) for the Ads Marketing team in Sub-Saharan Africa. My work often comes to life through local tool launches and events that share insights and practical tips with clients to help them reach their business goals.

The Google APMM program is a unique career path on the Google Marketing team. As a cohort-based, two-and-a-half-year rotational development program, it provides an active community, leadership roles, and job rotations to help you discover different marketing teams across Google.

I’m also an inclusivity advocate. Since joining Google, I have helped create inclusive marketing campaigns, research, and business training specifically for the LGBTQ+ community in the region.

What have been the driving forces behind your career?

My family has had a huge impact on my career. My parents, aunts and uncles have all achieved success and happiness despite the adversities they faced during the Apartheid regime. The values they’ve instilled in me have influenced how I empower myself and others through education. I feel fulfilled in my career when I know that I’ve contributed to improving the lives of others, whether that’s through supporting people’s business needs or helping them develop new skills.

How would you describe your path to Google?

Before Google, I was a marketing consultant at Discovery Health, an insurance company that encourages people to live healthier. Towards the end of 2019, I decided to look for a new job that would give me the opportunity to build my problem-solving skills, develop strategies and work with different people around the world. At the beginning of 2020, I started a new job as a management consultant at a local management consulting firm. Just before I transitioned to this new role, a recruiter reached out to me on LinkedIn about an open Associate Product Marketing Manager role at Google. After a quick call with her, I immediately began the application and interview process, which all took place virtually. And I was lucky enough to get the role! I joined Google in April 2020, soon after the world was thrust into a global pandemic. Although I haven’t set foot in a Google office yet, it’s been an incredible experience working with so many talented people.

What surprised you about the interview process?

I was surprised by the rounds of interviews and the amount of communication from my recruiter throughout the whole process. It was reassuring to have someone to reach out to with questions, and who would proactively keep me updated. Everyone throughout the interview process was so lovely and made an effort to help me feel comfortable. It was a really human experience, and I could get a sense of the company culture from everyone I met.

What gets you most excited in your role?

What excites me most about my role is the breadth of work available, my amazing colleagues, and the tangible and positive impact we are making in the region. I’ve contributed to projects like the Economic Recovery campaign, which helps small businesses, jobseekers, educators and students find their feet and recover during the COVID-19 pandemic. These efforts gave me a sense of purpose during a challenging time, and showed me that I can make a difference in my job. It was inspiring to see how some of the small businesses we worked with not only recovered, but thrived under very difficult circumstances. And working alongside a team dedicated to helping as many people as possible has been one of the proudest moments of my career.

And what excites you outside of your role?

My guilty pleasure is reality TV! I love watching the Real Housewives franchise. I’m also a huge foodie, and I like finding new places to try new food and hang out. To stay level-headed, I enjoy Pilates, yoga and hiking, and recently discovered the benefits of meditation. I’m also an advocate for sustainability and environmental preservation. In fact, I’ve taken up beekeeping to support the declining population of bees around the world.

Any tips for anyone hoping to join Google in Africa?

Have confidence in your ability. Don’t doubt the amazing things that you can do, and the impact you can make across the continent.