Community and mentorship help women entrepreneurs thrive

EngageRocket co-founder Dorothy Yiu and her husband were eagerly expecting the arrival of their first baby. But as Dorothy was preparing to start her maternity leave, she found herself battling feelings of guilt. She was hesitant to step away from her work responsibilities, her team and the company that she had been pouring her heart into.

EngageRocket — a Singaporean company that helps companies improve their employees’ experiences at work — recently celebrated its fifth anniversary. Dorothy, now pregnant with her second child, says those years have been a time of growth and self-discovery for her. Not only has she learned to focus on what matters and let go of her self-inflicted feelings of inadequacy, she has also come to realize how important it is to initiate open conversations about the stigma, insecurity and imposter syndrome so many working women are struggling to overcome.

Today, on Women’s Entrepreneurship Day, stories like Dorothy’s are important reminders that there is still much more work to be done to empower women and girls so they can become confident entrepreneurs and equal participants in business. Promoting diversity and equal opportunities isn’t just the right thing to do – it also has a positive financial impact. Women Will research by Grow with Google notes that closing the entrepreneurial gender gap could boost the global economy by up to $5 trillion.

Group photo: the women founders and the Google for Startups team at their virtual graduation event.

The Women Founders Academy cohort of 2021 recently celebrated their graduation

EngageRocket is one of 10 recent graduates from the APAC Women Founders Academy Program by Google for Startups. During this 12-week program, the founders received training and mentoring from Googlers across the region to help sharpen their leadership skills and address their unique growth needs, including funding. All the participants highlighted the important role of communities and mentors in helping them gain knowledge, overcome mental barriers and build confidence.

Many of them noted that to build true equity, it’s important to look past gender divides and recognize people’s achievements as entrepreneurs and professionals, not only as women. Dorothy wholeheartedly agrees. Today, 50% of EngageRocket’s senior management team are women, all of them working mothers. As a leader, she is determined to build an open, flexible company culture that empowers everyone to balance their priorities – both at work and in their personal lives.

At Google, we’re committed to helping more women like Dorothy grow and thrive in business. We know it’s one of the most powerful steps we can take to create new jobs and opportunities, advance equality, and contribute to an inclusive economic recovery that will benefit us all.

Celebrating news partners in the Asia Pacific

One of the best parts of my role is seeing the great examples of news publishers embracing technology to grow new audiences and build sustainable business models in the Asia-Pacific region.

This week, at the Google News Initiative (GNI) Global Summit and at local events in Australia, India, Korea, Japan and Southeast Asia, we heard from news partners about the impactful work they are doing.

Supporting a more sustainable, diverse and innovative news ecosystem

Our GNI Impact Report (released during the Summit) features stories from publishers in the Asia-Pacific, including DataLEADS, a partner we worked with to provide verification training for thousands of reporters across India.

Graphic summarizing the GNI’s impact over the last three years: $300M+ in funding, 7,000+ partners supported across 120+ countries and territories, and 450+ journalists trained in 70 countries.

We also heard from Indonesia’s Warta Ekonomi on how they improved their website and user experience, and developed their monetization strategy, after taking part in the GNI Local News Foundry.

Highlighting APAC leaders in innovation

The GNI Innovation Challenge program launched in the Asia Pacific three years ago with a call for applications looking at new ideas to generate reader revenue. Since then, GNI Innovation Challenges have supported more than 200 news organizations around the world — and we heard some of their stories at the Summit.

Kumparan, a media organization in Indonesia, received funding from the GNI Innovation Challenge to help create kumparanDerma, a tool that streamlines the donation process for readers to provide aid during disasters and emergencies.

In India, The News Minute used GNI funding to identify a new, sustainable revenue stream that supplemented their existing advertising model. They used data and insights to launch a membership program and about a year and a half after the project began, they hit 3,000 subscribers. This project helped TNM continue to serve their audience with independent journalism.


The News Minute team

In Korea, Busan Daily used funding from the Innovation Challenge to improve the way they used data to understand their audience. These are just a few of the great examples we heard on how this program has helped publishers.

Continuing our support for news with new products and tools

The GNI Global Summit featured an update on Google News Showcase, our new product experience and licensing program for news, which aims to help publishers engage more deeply with their readers and to help readers find, follow and support news organizations. Since it launched in October 2020, we’ve signed deals with more than 1,000 news publications around the world, including in India, Japan and Australia.

We also announced new features coming to Google Search that help readers find content from local publishers even more easily than before. We’re expanding a feature that we initially launched for COVID searches, so readers will soon see a carousel of local news stories when Google finds local news coverage relevant to their query. This carousel will be available globally.

There are so many great stories from publishers around the world, as well as updates on our ongoing support for the news ecosystem, in the GNI blog collection.

Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 97 (97.0.4692.20) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Ben Mason
Google Chrome

Permutation-Invariant Neural Networks for Reinforcement Learning

“The brain is able to use information coming from the skin as if it were coming from the eyes. We don’t see with the eyes or hear with the ears, these are just the receptors, seeing and hearing in fact goes on in the brain.”
Paul Bach-y-Rita [1]

People have the amazing ability to use one sensory modality (e.g., touch) to supply environmental information normally gathered by another sense (e.g., vision). This adaptive ability, called sensory substitution, is a phenomenon well-known to neuroscience. While difficult adaptations — such as adjusting to seeing things upside-down, learning to ride a “backwards” bicycle, or learning to “see” by interpreting visual information emitted from a grid of electrodes placed on one’s tongue — can take weeks, months or even years to master, people are eventually able to adjust to sensory substitutions.

Examples of Sensory Substitution. Left: Tongue Display Unit (Maris and Bach-y-Rita, 2001; Image: Kaczmarek, 2011). Right: “Upside down goggles” initially conceived by Erismann and Kohler in 1931 (Image: Wikipedia).

In contrast, most neural networks are not able to adapt to sensory substitutions at all. For instance, most reinforcement learning (RL) agents require their inputs to be in a pre-specified format, or else they will fail. They expect fixed-size inputs and assume that each element of the input carries a precise meaning, such as the pixel intensity at a specified location, or state information, like position or velocity. In popular RL benchmark tasks (e.g., Ant or Cart-pole), an agent trained using current RL algorithms will fail if its sensory inputs are changed or if the agent is fed additional noisy inputs that are unrelated to the task at hand.

In “The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning”, a spotlight paper at NeurIPS 2021, we explore permutation invariant neural network agents, which require each of their sensory neurons (receptors that receive sensory inputs from the environment) to figure out the meaning and context of its input signal, rather than explicitly assuming a fixed meaning. Our experiments show that such agents are robust to observations that contain additional redundant or noisy information, and to observations that are corrupt and incomplete.

Permutation invariant reinforcement learning agents adapting to sensory substitutions. Left: The ordering of the ant’s 28 observations is randomly shuffled every 200 time-steps. Unlike the standard policy, our policy is not affected by the suddenly permuted inputs. Right: Cart-pole agent given many redundant noisy inputs (Interactive web-demo).

In addition to adapting to sensory substitutions in state-observation environments (like the ant and cart-pole examples), we show that these agents can also adapt to sensory substitutions in complex visual-observation environments (such as a CarRacing game that uses only pixel observations) and can perform when the stream of input images is constantly being reshuffled:

We partition the visual input from CarRacing into a 2D grid of small patches and shuffle their ordering. Without any additional training, our agent still performs even when the original training background (left) is replaced with new images (right).

Method
Our approach takes observations from the environment at each time-step and feeds each element of the observation into distinct, but identical, neural networks (called “sensory neurons”), each with no fixed relationship to one another. Each sensory neuron integrates, over time, information from only its particular sensory input channel. Because each sensory neuron receives only a small part of the full picture, the neurons need to self-organize through communication in order for a globally coherent behavior to emerge.
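
To make this concrete, here is a minimal sketch (in Python/NumPy, not the paper’s released code) of such a shared “sensory neuron” module: one set of weights applied independently to every input channel, with a separate hidden state per channel so each neuron can integrate its own signal over time. The layer sizes and the simple tanh recurrence are illustrative assumptions.

```python
import numpy as np

class SensoryNeuron:
    """One shared set of weights, applied separately to every input channel.

    Each channel keeps its own hidden state, so the module can integrate
    its particular signal over time, as described above. The sizes and the
    simple tanh recurrence are illustrative assumptions, not the paper's
    exact architecture.
    """

    def __init__(self, hidden_dim=8, msg_dim=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.1, size=(hidden_dim, 2))   # [obs element, prev action]
        self.W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
        self.W_msg = rng.normal(scale=0.1, size=(msg_dim, hidden_dim))

    def init_state(self, num_channels):
        # One hidden state per input channel; channels share weights only.
        return np.zeros((num_channels, self.W_h.shape[0]))

    def step(self, obs_elements, prev_action, hidden):
        # obs_elements: (num_channels,) -- one scalar per sensory neuron.
        x = np.stack([obs_elements, np.full_like(obs_elements, prev_action)], axis=-1)
        hidden = np.tanh(x @ self.W_in.T + hidden @ self.W_h.T)
        messages = hidden @ self.W_msg.T          # (num_channels, msg_dim)
        return messages, hidden
```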

Illustration of observation segmentation. We segment each input into elements, which are then fed to independent sensory neurons. For non-vision tasks where the inputs are usually 1D vectors, each element is a scalar. For vision tasks, we crop each input image into non-overlapping patches.
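
For the vision case, the patch segmentation could look something like the helper below; the 16x16 patch size and the 96x96 CarRacing-style frame are assumptions for illustration.

```python
import numpy as np

def split_into_patches(image, patch=16):
    """Crop an (H, W, C) image into non-overlapping patch x patch tiles.

    Returns an array of shape (num_patches, patch, patch, C); each tile is
    later handed to its own sensory neuron. Assumes H and W are divisible
    by the patch size, for simplicity.
    """
    h, w, c = image.shape
    tiles = image.reshape(h // patch, patch, w // patch, patch, c)
    tiles = tiles.transpose(0, 2, 1, 3, 4)        # (rows, cols, patch, patch, C)
    return tiles.reshape(-1, patch, patch, c)

# Example: a 96x96 RGB frame (as in CarRacing) becomes 36 patches of 16x16.
frame = np.zeros((96, 96, 3))
print(split_into_patches(frame).shape)            # (36, 16, 16, 3)
```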

We encourage neurons to communicate with each other by training them to broadcast messages. While receiving information locally, each individual sensory neuron also continually broadcasts an output message at each time-step. These messages are consolidated and combined into an output vector, called the global latent code, using an attention mechanism similar to that applied in the Transformer architecture. A policy network then uses the global latent code to produce the action that the agent will use to interact with the environment. This action is also fed back into each sensory neuron in the next time-step, closing the communication loop.
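
A hedged sketch of that aggregation step is shown below: a small set of learned, input-independent queries attends over the unordered pool of messages, so the resulting global latent code has a fixed size no matter how many sensory neurons there are. Single-head attention and the specific dimensions are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class AttentionPool:
    """Summarize a variable number of messages into a fixed-size latent code."""

    def __init__(self, msg_dim=16, latent_slots=4, key_dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.queries = rng.normal(scale=0.1, size=(latent_slots, key_dim))  # learned, input-independent
        self.W_k = rng.normal(scale=0.1, size=(key_dim, msg_dim))
        self.W_v = rng.normal(scale=0.1, size=(key_dim, msg_dim))

    def __call__(self, messages):
        # messages: (num_neurons, msg_dim); num_neurons may vary between episodes.
        keys = messages @ self.W_k.T                    # (num_neurons, key_dim)
        values = messages @ self.W_v.T                  # (num_neurons, key_dim)
        attn = softmax(self.queries @ keys.T / np.sqrt(keys.shape[-1]), axis=-1)
        return (attn @ values).reshape(-1)              # global latent code, fixed size
```

Because the attention weights are computed from the messages themselves, permuting the rows of the message matrix reorders the attention columns and the value rows consistently, so the weighted sum, and therefore the latent code, is unchanged.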

Overview of the permutation-invariant RL method. We first feed each individual observation (oₜ) into a particular sensory neuron (along with the agent’s previous action, aₜ₋₁). Each neuron then produces and broadcasts a message independently, and an attention mechanism summarizes them into a global latent code (mₜ) that is given to the agent’s downstream policy network (π) to produce the agent’s action aₜ.

Why is this system permutation invariant? Each sensory neuron is an identical neural network that is not confined to processing information from one particular sensory input. In fact, in our setup, the inputs to each sensory neuron are not defined. Instead, each neuron must figure out the meaning of its input signal by paying attention to the inputs received by the other sensory neurons, rather than explicitly assuming a fixed meaning. This encourages the agent to process the entire input as an unordered set, making the system permutation invariant to its input. Furthermore, in principle, the agent can use as many sensory neurons as required, enabling it to process observations of arbitrary length. Both of these properties help the agent adapt to sensory substitutions.
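
Under the toy assumptions above (and assuming the SensoryNeuron and AttentionPool sketches are in scope), the invariance is easy to check numerically: shuffling the observation elements, together with their per-channel hidden states, leaves the latent code unchanged.

```python
import numpy as np

neuron = SensoryNeuron(hidden_dim=8, msg_dim=16)
pool = AttentionPool(msg_dim=16, latent_slots=4, key_dim=8)

obs = np.random.default_rng(1).normal(size=28)        # e.g., the 28 Ant observations
hidden = neuron.init_state(num_channels=obs.size)

msgs, _ = neuron.step(obs, prev_action=0.0, hidden=hidden)
code = pool(msgs)

perm = np.random.default_rng(2).permutation(obs.size)
msgs_shuffled, _ = neuron.step(obs[perm], prev_action=0.0, hidden=hidden[perm])
code_shuffled = pool(msgs_shuffled)

print(np.allclose(code, code_shuffled))               # True: the order of inputs doesn't matter
```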

Results
We demonstrate the robustness and flexibility of this approach in simpler, state-observation environments, where the observations the agent receives as inputs are low-dimensional vectors holding information about the agent’s states, such as the position or velocity of its components. The agent in the popular Ant locomotion task has a total of 28 inputs with information that includes positions and velocities. We shuffle the order of the input vector several times during a trial and show that the agent is rapidly able to adapt and is still able to walk forward.

In cart-pole, the agent’s goal is to swing up a pole mounted at the center of the cart and balance it upright. Normally the agent sees only five inputs, but we modify the cart-pole environment to provide 15 shuffled input signals, 10 of which are pure noise and the remainder of which are the actual observations from the environment. The agent is still able to perform the task, demonstrating the system’s capacity to work with a large number of inputs and attend only to channels it deems useful. Such flexibility may find useful applications for processing a large, unspecified number of signals, most of which are noise, from ill-defined systems.
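
One way to reproduce that setup, sketched here as an assumption rather than the authors’ exact environment code, is a thin wrapper that pads each observation with fresh noise channels and applies a permutation that stays fixed for the episode. It assumes a Gym-style reset()/step() interface returning a 1-D observation vector.

```python
import numpy as np

class NoisyShuffledObs:
    """Wrap an environment so each observation gains noise channels and is shuffled.

    Illustrative sketch: `env` is assumed to follow the classic Gym
    reset()/step() API and to return a 1-D observation vector (five values
    for the cart-pole task described above).
    """

    def __init__(self, env, num_noise=10, seed=0):
        self.env = env
        self.num_noise = num_noise
        self.rng = np.random.default_rng(seed)
        self.perm = None                              # fixed for the whole episode

    def _transform(self, obs):
        padded = np.concatenate([obs, self.rng.normal(size=self.num_noise)])
        if self.perm is None:
            self.perm = self.rng.permutation(padded.size)
        return padded[self.perm]

    def reset(self, **kwargs):
        self.perm = None
        obs = self.env.reset(**kwargs)
        return self._transform(obs)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return self._transform(obs), reward, done, info
```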

We also apply this approach to high-dimensional vision-based environments where the observation is a stream of pixel images. Here, we investigate screen-shuffled versions of vision-based RL environments, where each observation frame is divided into a grid of patches, and like a puzzle, the agent must process the patches in a shuffled order to determine a course of action to take. To demonstrate our approach on vision-based tasks, we created a shuffled version of Atari Pong.
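
Reusing the split_into_patches helper sketched earlier, the screen shuffling itself amounts to handing the agent the same patches in a random order each frame (again, an illustrative assumption about the setup rather than the released benchmark code):

```python
import numpy as np

def shuffled_patches(frame, patch=16, rng=None):
    """Return the frame's non-overlapping patches in a random order.

    Uses the split_into_patches() helper sketched above; drawing a new
    permutation per call models the constant reshuffling described here.
    """
    if rng is None:
        rng = np.random.default_rng()
    tiles = split_into_patches(frame, patch)
    return tiles[rng.permutation(len(tiles))]
```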

Shuffled Pong results. Left: Pong agent trained to play using only 30% of the patches matches performance of Atari opponent. Right: Without extra training, when we give the agent more puzzle pieces, its performance increases.

Here the agent’s input is a variable-length list of patches, so unlike typical RL agents, the agent only gets to “see” a subset of patches from the screen. In the puzzle Pong experiment, we pass the agent a random sample of patches across the screen, which are then fixed through the remainder of the game. We find that we can discard 70% of the patches (at these fixed-random locations) and still train the agent to perform well against the built-in Atari opponent. Interestingly, if we then reveal additional information to the agent (e.g., allowing it access to more image patches), its performance increases, even without additional training. When the agent receives all the patches, in shuffled order, it wins 100% of the time, achieving the same result as agents that are trained while seeing the entire screen.

We find that imposing additional difficulty during training by using unordered observations has additional benefits, such as improving generalization to unseen variations of the task, like when the background of the CarRacing training environment is replaced with a novel image.

Shuffled CarRacing results. The agent has learned to focus its attention (indicated by the highlighted patches) on the road boundaries. Left: Training environment. Right: Test environment with new background.

Conclusion
The permutation invariant neural network agents presented here can handle ill-defined, varying observation spaces. Our agents are robust to observations that contain redundant or noisy information, or observations that are corrupt and incomplete. We believe that permutation invariant systems open up numerous possibilities in reinforcement learning.

If you’re interested in learning more about this work, we invite you to read our interactive article (pdf version) or watch our video. We also released code to reproduce our experiments.



[1] Quoted in Livewired, by David Eagleman.

Source: Google AI Blog


Chrome for iOS Update

Hi, everyone! We've just released Chrome 96 (96.0.4664.53) for iOS: it'll become available on App Store in the next few hours.

This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Harry Souders

Google Chrome

Beta Channel Update for Desktop

 The Chrome team is excited to announce the promotion of Chrome 97 to the Beta channel for Windows, Mac and Linux. Chrome 97.0.4692.20 contains our usual under-the-hood performance and stability tweaks, but there are also some cool new features to explore - please head to the Chromium blog to learn more!



A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Prudhvikumar Bommana
Google Chrome

A decade in deep learning, and what’s next

Twenty years ago, Google started using machine learning, and 10 years ago, it helped spur rapid progress in AI using deep learning. Jeff Dean and Marian Croak of Google Research take a look at how we’ve innovated on these techniques and applied them in helpful ways, and look ahead to a responsible and inclusive path forward.

Jeff Dean

From research demos to AI that really works

I was first introduced to neural networks — computer systems that roughly imitate how biological brains accomplish tasks — as an undergrad in 1990. I did my senior thesis on using parallel computation to train neural networks. In those early days, I thought that if we could get 32X more compute power (using 32 processors at the time!), we could get neural networks to do impressive things. I was way off. It turns out we would need about 1 million times as much computational power before neural networks could scale to real-world problems.

A decade later, as an early employee at Google, I became reacquainted with machine learning when the company was still just a startup. In 2001 we used a simpler version of machine learning, statistical ML, to detect spam and suggest better spellings for people’s web searches. But it would be another decade before we had enough computing power to revive a more computationally intensive machine learning approach called deep learning. Deep learning uses neural networks with multiple layers (thus the “deep”), so it can learn not just simple statistical patterns, but subtler patterns of patterns — such as what’s in an image or what word was spoken in some audio. One of our first publications, in 2012, was on a system that could find patterns among millions of frames from YouTube videos. That meant, of course, that it learned to recognize cats.

To get to the helpful features you use every day — searchable photo albums, suggestions on email replies, language translation, flood alerts, and so on — we needed to make years of breakthroughs on top of breakthroughs, tapping into the best of Google Research in collaboration with the broader research community. Let me give you just a couple examples of how we’ve done this.

A big moment for image recognition

In 2012, a paper wowed the research world by making a huge jump in accuracy on image recognition using deep neural networks, leading to a series of rapid advances by researchers outside and within Google. Further advances led to applications like Google Photos in 2015, letting you search photos by what’s in them. We then developed other deep learning models to help you find addresses in Google Maps, make sense of videos on YouTube, and explore the world around you using Google Lens. Beyond our products, we applied these approaches to health-related problems, such as detecting diabetic retinopathy in 2016, and then cancerous cells in 2017, and breast cancer in 2020. Better understanding of aerial imagery through deep learning let us launch flood forecasting in 2018, now expanded to cover more than 360 million people in 2021. It’s been encouraging to see how helpful these advances in image recognition have been.

Similarly, we’ve used deep learning to accelerate language understanding. With sequence-to-sequence learning in 2014, we began looking at how to understand strings of text using deep learning. This led to neural machine translation in Google Translate in 2016, which was a massive leap in quality, particularly for less prevalent languages. We developed neural language models further for Smart Reply in Gmail in 2017, which made it easier and faster for you to knock through your email, especially on mobile. That same year, Google invented Transformers, leading to BERT in 2018, then T5, and in 2021 MUM, which lets you ask Google much more nuanced questions. And with “sparse” models like GShard, we can dramatically improve on tasks like translation while using less energy.

We’ve driven a similar arc in understanding speech. In 2012, Google used deep neural networks to make major improvements to speech recognition on Android. We kept advancing the state of the art with higher-quality, faster, more efficient speech recognition systems. By 2019, we were able to put the entire neural network on-device so you could get accurate speech recognition even without a connection. And in 2021, we launched Live Translate on the Pixel 6 phone, letting you speak and be translated in 48 languages -- all on-device, while you’re traveling with no Internet.

More invention ahead

As our research goes forward, we’re balancing more immediately applied research with more exploratory fundamental research. So we’re looking at how, for example, AI can aid scientific discovery, with a project like mapping the brain of a fly, which could one day help better understand and treat mental illness in people. We’re also pursuing quantum computing, which will likely take a decade or longer to reach wide-scale applications. This is why we publish nearly 1,000 papers a year, including around 200 related to responsible AI, and we’ve given over 6,500 grants to external researchers over the past decade and a half.

Looking ahead from 2021 to 2031, I'm excited about the next-generation AI systems we can build, and how much more helpful they’ll be. We’re planting the seeds today with new architectures like Pathways, with more to come.

Marian Croak

Minding the gap(s)

As we develop these lines of research and turn them into useful technologies, we’re mindful of the broader societal impact of AI, and especially that technology has not always had an equitable impact. This is personal for me — I care deeply about ensuring that people from all different backgrounds and circumstances have a good experience.

So we’re increasing the depth and rigor of how we review and evaluate our research to ensure we’re developing it responsibly. We’re also scaling up what we learn by inventing new tools to understand and calibrate critical AI systems across Google's products. We’re growing our organization to 200 experts in Responsible AI and Human Centered Technology, and working with hundreds of partners in product, privacy, security, and other teams across Google.

As one example of our work on responsible AI, Google Research began exploring the nascent field of ML fairness in 2016. The teams realized that on top of publishing papers, they could have a greater impact by teaching ML practitioners how to build with fairness in mind, as with the course we launched in 2018. We also started building interactive tools that coders and researchers could use, from the What-If Tool in 2018 to the 2019 launch of our Fairness Indicators tool, all the way to Know Your Data in 2021. All of these are concrete ways that AI developers can test their datasets and models to see what kind of biases and gaps there are, and start to work on mitigations to prevent unfair outcomes.

A principled approach

In fact, fairness is one of the key tenets of our AI Principles. We developed these principles in 2017 and published them in 2018, announcing not only the Principles themselves but a set of responsible AI practices with practical organizational and technical advice from what we’ve learned along the way. I was proud to be involved in the AI Principles review process from early on — I’ve seen firsthand how rigorous the teams at Google are on evaluating the technology we’re developing and deciding how best to deploy it in the real world.

Indeed, there are paths we’ve chosen not to go down — the AI Principles describe a number of areas we avoid. In line with our principles, we’ve taken a very cautious approach on face recognition. We recognize how fraught this area is not only in terms of privacy and surveillance concerns, but also its potential for unfair bias and impacts on historically marginalized groups. I’m glad that we’re taking this so thoughtfully and carefully.

We’re also developing technologies that help engineers apply the AI Principles directly — for example, incorporating privacy design principles. We invented Federated Learning in 2017 as a way to train ML models without your personal data leaving your phone. In 2018 we showed how well this works on Gboard, the free keyboard you can download for your phone — it learns to provide you more useful suggestions, while keeping what you type private on your device.
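
As a loose illustration of the idea, not Google’s production system, federated averaging keeps each device’s raw data local and only sends locally trained model weights back to a server for averaging. The tiny linear model, client data format and single-server loop below are all simplified assumptions.

```python
import numpy as np

def local_update(weights, client_data, lr=0.1, epochs=1):
    """One client's training pass on its own data; raw examples never leave the device."""
    w = weights.copy()
    for _ in range(epochs):
        for x, y in client_data:          # hypothetical (features, target) pairs
            grad = (w @ x - y) * x        # gradient of squared error for a linear model
            w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Server step of federated averaging: average the clients' locally trained weights."""
    local_weights = [local_update(global_weights, data) for data in clients]
    return np.mean(local_weights, axis=0)

# Hypothetical usage: three devices, each holding its own private (x, y) pairs.
rng = np.random.default_rng(0)
clients = [[(rng.normal(size=3), rng.normal()) for _ in range(20)] for _ in range(3)]
weights = np.zeros(3)
for _ in range(5):
    weights = federated_round(weights, clients)
print(weights)
```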

If you’re curious, you can learn more about all these veins of research, product impact, processes, and external engagement in our 2021 AI Principles Progress Update.

AI by everyone, for everyone

As we look to the decade ahead, it’s incredibly important that AI be built in a way that works well for everyone. That means building as inclusive a team as we can ourselves at Google. It also means ensuring the field as a whole increasingly represents the people whose lives it aims to improve.

I’m proud to lead the Black Leadership Advisory Group (BLAG) at Google. We helped craft and drive programs included in Google’s recent update on racial equity work. For example, we paired up new director-level hires with BLAG members, and the feedback has been really positive, with 80% of respondents saying they'd recommend the program. We’re looking at extending this to other groups, including for Latinx+ and Asian+ Googlers. We’re holding ourselves accountable as leaders too — we now evaluate all VPs and above at Google on progress on diversity, equity, and inclusion. This is crucial if we’re going to have a more representative set of researchers and engineers building future technologies.

For the broader research and computer science communities, we’re providing a wide variety of grants, programs, and collaborations that we hope will welcome a more representative range of researchers. Our Research Scholar Program, begun in 2021, gave grants to more than 50 universities in 15+ countries — and 43% of the principal investigators identify as part of a group that’s been historically marginalized in tech. Similarly, our exploreCSR and CS Research Mentorship programs support thousands of undergrads from marginalized groups. And we’re partnering with groups like the National Science Foundation on their new Institute for Human-AI Collaborations.

We’re doing everything we can to make AI work well for all people. We’ll not only help ensure products across Google are using the latest practices in responsible AI — we’ll also encourage new products and features that serve those who’ve historically missed out on helpful new technologies. One example is Project Relate, which uses machine learning to help people with speech impairments communicate and use technology more easily. Another is Real Tone, which helps our imaging products like our Pixel phone camera and Google Photos more accurately and beautifully represent a diverse range of skin tones. These are just the start.

We’re excited for what’s ahead in AI, for everyone.

Find food and give back with Google

In Google’s early days, around this time every year, a group of us would run to Costco and buy supplies to take to Bay Area food banks and pantries. It was a grassroots effort that was scrappy and meaningful — and it introduced a lot of Googlers to how rewarding giving back can be. It made me want to learn what more we could do to have an even bigger impact.

Inspired by our small and mighty food donation operation, I became a passionate supporter of Second Harvest Food Bank in Silicon Valley. And with guidance from food assistance experts, we established a dedicated team at Google in 2020 to work on tackling issues of food waste and food insecurity. Too many families are having to make difficult decisions no one should be forced to make: paying rent, bills, healthcare costs — or keeping food on their table. These challenges have only been compounded by the COVID-19 crisis, which has left more than 54 million working Americans struggling to find a meal. That’s nearly 16% of the country.

Google co-founder Larry Page once said "people are starving in the world not because we don't have enough food. It's because we're not yet organized to solve that problem." The United Nations Food and Agriculture Organization (UNFAO) reports that the world produces more than we need to feed every person on this planet. This isn’t a problem of supply, it’s a problem of distribution. And while solving this issue will require work from government, businesses, nonprofits and individuals working together, one way Google can help is to give people easy access to the information they need, when they need it.

Helping people find food pantries

When you look at Google Search trends, you can see that searches for "food bank" and “food pantry” spike during the month of November.

Food banks have always been critical to making sure people have regular access to nutritious food, but the ongoing pandemic has drastically increased their role as a crucial lifeline in so many communities. With the need for their services doubling or even tripling in some areas, we want to make sure that the people who need them most can find them.

That’s why we’ve launched a new initiative to expand the information about food banks and pantries in Google Search and Maps. We’ve augmented existing coverage with data from two initial nonprofit partners: WhyHunger and Hunger Free America, and we’ve added information to make sure people searching for food support can find what they need. These changes are being made directly in Google Maps so food banks, food pantries and soup kitchens can focus on what matters most — getting people food.

Still, some of these locations don’t yet have websites or phone numbers available on Google. So over the last two months, we've worked to update this information in Search and Maps, making more than 85,000 calls to verify local food banks and pantries. These efforts will continue through the holidays.

Mobile image showing Google Search results for the query “food pantry near me.”

We’ve also developed new Google Business Profile features specifically for food banks, pantries and soup kitchens. They can now provide details on their profile, like whether an appointment is needed, if there are eligibility requirements to receive food and what languages are spoken. They can also add information about their services, like whether prepared meals are available or if grocery delivery is an option. Additionally, pantries can specify whether they’re accepting new volunteers or soliciting food or monetary donations.

Helping people access benefits

Beyond working with food pantries, we’re also helping people use Search to find out how to get and use food assistance benefits.

Federal programs like the Supplemental Nutrition Assistance Program (SNAP) feed more than 40 million Americans each year. We heard from users that information about these programs is often hard to find, especially for people who are using them for the first time. Today, if you search on Google for “SNAP benefits,” or the name of your local SNAP program, you’ll find direct links to each state’s eligibility guidelines and application process, including contact information for local food assistance agencies.

Mobile image showing Google Search results for the query “SNAP benefits,” with details about program eligibility and links to apply for local programs.

Once approved, many people use Electronic Benefits Transfer cards (also known as EBT) to pay for their groceries. Now, if you search for “grocery stores that accept EBT” you can easily find USDA-approved stores that accept this form of payment — saving time and potential confusion.

Supporting hunger relief organizations – and the communities they serve

I’m also proud to announce that we’re contributing financial support as well. Since the COVID-19 crisis began, Googlers have stepped up – giving more than $22 million in personal donations and company-match to hunger relief organizations in the U.S. Today, Google is contributing an additional $2 million in support ($1 million in cash funding and $1 million in donated ads from Google.org) to 20 food banks, pantries and innovative hunger relief organizations across the country.

There is no easy solution to these large-scale challenges that face our communities, but I’m hopeful that increasing access to information about local food support programs and services can help. Our teams are hard at work and committed to building new tools and features that support economic recovery in the U.S. – and around the world – as we weather the COVID-19 crisis. And I personally am really looking forward to getting back to sorting and distributing food with my family at our local food bank.

You can make an impact by volunteering your time, making a donation, using your voice, or a combination of each — there are a number of ways we can all give back. If you need a place to start, you can donate to the largest national network of food banks, Feeding America. Or you can get involved locally: just search for your nearest food pantry on Google and contact them to see what they need. And if you know someone who might need food assistance, you can simply help by sharing resources. Spreading the word not only about what you’re doing to help, but also why, can make a huge difference.

Helping nonprofits fundraise this season of giving

In 2020, people in the U.S. donated an estimated $2.5 billion on Giving Tuesday alone. To help connect nonprofits with people who are searching for ways to give their time and resources, Google.org will donate $25 million in ads to nonprofits around the world.

These grants are incremental to the baseline $10,000 per month Ad Grant offering and will go to nonprofits focused on humanitarian response, food insecurity and economic recovery. For example, organizations like Direct Relief may use the incremental Ad Grants to attract more donors who are searching on Google for ways to help vulnerable populations, while SCORE may use the grants to connect people looking for ways to volunteer on Google with an opportunity to sign up to be a small business mentor.

Google.org awards over $1 billion in Ad Grants annually to qualifying nonprofits. Last Giving Season, many organizations that received incremental Ad Grants, like Houston Food Bank, more than doubled the donations they raised as compared to similar organizations receiving the baseline Ad Grant. After receiving incremental Ad Grants in 2020, Houston Food Bank saw a fourfold increase in total donations from their campaigns — raising $130,000 in donations in a single month.

“We've had to work with quickness and efficiency to reach out to those who need us most,” said Jessica Dominguez, Annual Giving Manager at Houston Food Bank. “The easiest way for people to donate and find their closest food location is to turn to the web. The Ad Grant gave us the opportunity to reach these people and provide them with the right information.”

In addition to these incremental grants, all eligible organizations may sign up to receive $10,000 per month in Ad Grants and apply for pro bono account support through Google’s Nonprofit Marketing Immersion.

Happy giving!