Use Google Assistant on more devices

Quick launch summary 

We previously announced that access to Google Workspace services from Google Assistant is generally available on users’ personal devices. You can now also use Google Assistant with smart displays and speakers, such as the Nest Hub Max. 

You can access Google Workspace services, such as Google Meet or Calendar, using the Google Assistant on more devices, such as the Nest Hub Max.



Admins will need to enable Search and Assistant for these devices to ensure users can access Google Workspace data through Assistant. If admins allow access on these home devices, they can also specify whether the device requires Voice Match or Face Match to authenticate. 

Getting started 


Rollout pace 


Availability 

  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers 

Resources 

News Brief: June updates from the Google News Initiative

Last month, we expanded journalist training in India to combat misinformation, invested in startup growth in Latin America, learned about innovative news projects around the world and more. Read on for June updates.

Combating misinformation in India

In India, DataLEADS, our Google News Initiative training network partner, completed a 35-day virtual roadshow to provide digital verification skills to over 4,000 people. More than 700 organizations took part in workshops focused on tackling misinformation related to COVID-19 vaccines.

Supporting news startups in Latin America

The Google News Initiative Startups Lab is expanding to Spanish-speaking Latin America, in partnership with SembraMedia. Through direct funding and an intensive six-month curriculum, the Lab will help a group of up to 12 early-stage digital news businesses develop financial sustainability and growth. This builds on lessons learned from the Startups Labs in Brazil and North America. 

Last month, we also released a Spanish version of the Google News Initiative Startups Playbook, a guide to building a successful digital news business from scratch.

Engaging with the global news community through Newsgeist

Together with other news industry leaders, we organized a virtual, week-long version of Newsgeist, an opportunity to connect with the global news community to discuss relevant topics, share projects and initiatives, and tackle challenging problems facing the news industry. The event brought together more than 600 journalists, business leaders, tech leaders, academics and others for a discussion about the state and future of the news industry.

Collaborating on AI literacy

Over the next six months, 24 international news organizations will take part in a collaborative experiment across Asia Pacific, Europe and the Americas. The program was developed in partnership with Polis, the London School of Economics and Political Science’s journalism think tank, through JournalismAI, our effort to strengthen AI literacy within newsrooms and convene the industry around common challenges and opportunities.

Learning from Innovation Challenge recipients

Building on the Digital News Innovation Fund in Europe, Google News Initiative Innovation Challenges have supported more than 180 projects that inject new ideas into the news industry. Around the world, we’re learning from former Innovation Challenge recipients who are using their funding to drive innovation in news.

  • Word in Black chose Juneteenth, the anniversary of the day the last slaves were freed in the U.S., to launch a new website and newsletter for Black communities in collaboration with the Local Media Foundation.

  • AnyClip combines artificial intelligence and search tools to provide video analytics for content providers. The Israeli startup has raised an additional $47 million to build out its platform and expand business after seeing 600% growth in the last year.

  • Socialbeat is an Italian startup developed through a collaboration between Accenture and Italian publisher SESAAB. With the help of a recent investment, they’ll continue to enhance their AI-powered software platform for aggregation and content selection.

  • The Sicilian Post created the ARIA project, which allows journalists to automatically create illustrative graphics using data. This month, they hosted a workshop at an Italian conference to introduce participants to the project.

Using AI to moderate content

The changing legal and political environment in Europe, as well as growing extremism and polarization in society, means that moderation tools are often inadequate for modern journalism. In light of these factors, Wirtualna Polska built a moderation engine using Google Cloud tools to help ease the burden on content moderators and provide a safe platform for open discussion in Poland. 

Helping European publishers grow their digital revenue

In partnership with WAN-IFRA, we’re launching the 2021-2022 Table Stakes Europe program, designed to help European publishers drive digital revenue growth by putting audiences first. Applications are now open and will be reviewed on a rolling basis. The program is scheduled to begin in December 2021 and will run for nine months.

That’s a wrap for June. Follow along on social and sign up for our newsletter for more updates.

Bell Partners with Google Cloud to Deliver Next-generation Network Experiences for Canadians

Today, Bell Canada and Google Cloud announced a strategic partnership to power Bell’s company-wide digital transformation, enhance its network and IT infrastructure, and enable a more sustainable future.

If you would like to learn more about this news, click through to read the full press release in English and French.

Introducing Mobile Web Certification

In 2018, we launched Google Marketing Platform Partners to provide marketers a network of accredited partners to help them grow their business with our ads and analytics tools. As digital marketing becomes increasingly complex, businesses need help to solve challenges across and beyond our products, such as first-party data solutions, machine learning and more. Today we are expanding that partnership program to go beyond Google Marketing Platform products with the introduction of our first skills-based certification: Mobile Web Certification. This is our first step in a process to support a more comprehensive network of partners to meet your evolving business needs.

As today’s consumers increasingly turn to their phones to get things done, they expect experiences that are fast, seamless and personalized. In fact, a mere 0.1-second improvement in mobile site speed can boost conversion rates by 8%, and our new research shows that 72% of consumers are more likely to be loyal to a brand if it offers a personalized experience. That’s why mobile best practices -- from speed to user experience optimization -- can drive user engagement on mobile sites, improve user sign-in rates and help marketers generate richer data for optimizing return on ad spend.

Partners certified in Mobile Web work with your business objectives to implement improvements to your user experience while helping you drive engagement on your mobile site, increase mobile conversion rates and generate first-party data to support accurate performance measurement. They have passed a rigorous certification and testing protocol, showing mastery of a wide range of mobile services and an ability to help more users convert.

If you have a gap in skills within your own teams or you need an expert third-party perspective to help you prioritize, Partners certified in Mobile Web are here to help. Over the coming months we will be assessing and adding more Mobile Certified Partners, so please check our Partner Gallery if you are looking for help to improve your mobile website experience.

Mobile represents our first step beyond product certifications. We know this is just one area where you're looking for answers and we're committed to finding new ways Certified Partners can support you every step of the way.

How Vicky Fernandez found her passion for leading teams

Welcome to the latest edition of “My Path to Google,” where we talk to Googlers, interns and alumni about how they got to Google, what their roles are like and even some tips on how to prepare for interviews.


Today’s post is all about Vicky Fernandez, who shares how she went from one of the very first employees at our office in Buenos Aires to a leader who manages multiple teams.


What’s your role at Google? 

I work within Google’s ad sales business, where I manage the analysis, insights and optimization team for Spanish-speaking Latin America’s largest customers. The team brings together industry experts with specialists on performance, data and measurement solutions. I get to work with very talented people from all across the continent, taking best practices from one market to the other so that our clients thrive.


What does your typical workday look like right now? 

As a manager, I spend a lot of time meeting with my team, as well as collaborating with other project leaders. When meeting one-on-one with my direct reports, we speak about their current challenges and how I can help them. We also follow up on their objectives, projects, careers and check in on their well-being. 


Why did you decide to apply to work at Google? 

I was working for a TV company and looking for a change. I had heard that Google was opening offices in Buenos Aires (this was 15 years ago), so I decided to send them my resume. I knew nothing about digital marketing, so when they called me for interviews, I locked myself at home for a whole weekend and studied. Still,  I was not very confident after my interviews, but I was happy to participate in the process because I met really nice people and had a good time. 


Surprisingly, they called me back to join Google. I feel very proud to be part of this company, and I also feel proud to be part of our customers’ teams. At Google you belong not only to this company, but also to thousands of companies that trust us to grow their businesses.


How did the application and interview process go for you?

After sending my resume, I got a phone call with a recruiter and then four on-site interviews, all on the same day. At that time (15 years ago) Google had no offices in Buenos Aires yet, so many people from the U.S. and Mexico came for a week to do interviews in a temporary office they rented. I had no idea who they were, but they were all very nice and approachable. I’m glad I didn't know how important they were because I think I would have been a lot more nervous. 


How would you describe your path to your current role at Google? 

I started at Google supporting small businesses in Spanish-speaking Latin America. After a year or so I moved to support bigger companies in Mexico. (I did this remotely from Argentina, and I used to travel to Mexico a few times a year.)


Then I got the chance to take my first formal leadership role, leading a team dedicated to helping small businesses that use Google Ads solve technical, billing and optimization issues. I loved being a manager and decided that it was my path. After a couple of years growing that team, I moved to a new role to build a different team for big customers. After gaining experience growing the team and improving service levels and efficiency, I recently got the opportunity to manage these three teams together as one team. I feel really excited about it!


Do you have any tips you’d like to share with aspiring Googlers?

Think about the experiences related to leadership, teamwork and process improvements that you would like to share during the interviews. When questions come up, you can share those experiences. If you have success stories to show, try to have some numbers in mind (like growth in sales, efficiency gains or cost reduction).


What's one thing you wish you could go back and tell yourself before applying? 

Googlers are all very nice! You will have a great time, so focus on enjoying the interviews.


The new Google Cloud region in Delhi NCR is now open



In the past year, Google has worked to surface timely and reliable health information, amplify public health campaigns, and help nonprofits get urgent support to Indians in need. Now, we are continuing to focus on helping India’s businesses accelerate their digital transformation, deepening our commitment to India’s digitization and economic recovery. To support customers and the public sector in India and across Asia Pacific, we’re excited to announce that our new Google Cloud region in Delhi National Capital Region (NCR) is now open. 

Designed to help both Indian and global companies alike build highly available applications for their customers, the Delhi NCR region is our second Google Cloud region in India and 10th to open in Asia Pacific. 


What customers and partners are saying

Navigating this past year has been a challenge for companies as they grapple with changing customer demands and economic uncertainty. Technology has played a critical role, and we’ve been fortunate to partner with and serve people, companies, and government institutions around the world to help them adapt. The Google Cloud region in Delhi NCR will help our customers adapt to new requirements, new opportunities and new ways of working, like we’ve helped so many companies do in the region: 


  • InMobi scaled a personalized AI platform to support 120+ million active users. “With the arrival of the Google Cloud Delhi NCR, InMobi Group sees the opportunity to continue closing the gap between our users and products,” says Mohit Saxena, Co-founder and Group CTO of InMobi. “Glance, especially, has been serving AI-powered personalised content to over 120 million active users. We can’t wait to continue giving them truly meaningful experiences that are speedy, scale well, and are relevant to them, by expanding the use of our current tools working on Google Cloud with the opening of a new region.”

  • Groww now supports a sizable user base. “Google Cloud provides great technology that enables us to build and scale infrastructure to millions of users, and the new Google Cloud region in Delhi NCR will continue to help more businesses and startups in India access powerful cloud-based infrastructure, products and services,” says Neeraj Singh, Co-founder and Chief Technology Officer, Groww.

  • HDFC Bank is positioned for the future. "At HDFC Bank, we are harnessing technology platforms to both run and build the bank. As we progress to be future ready, the objective is to invest in future technologies that give us scale, efficiency and resiliency. Towards this the Google Cloud region in Delhi NCR will enable us to enhance our resiliency and help us in building an active-active design framework for our new generation applications on cloud," says Ramesh Lakshminarayanan, CIO, HDFC Bank.  

  • Dr. Reddy’s Lab built a modern data platform with Google Cloud. “At Dr Reddy’s, we pride ourselves in helping patients regain good health, acting quickly to provide innovative solutions to address patients’ unmet needs and in accelerating access to medicines to people worldwide. Our Google Cloud-powered data platform is helping us realize these objectives and we welcome Google’s investment in the new Delhi NCR region as helping us and other businesses in India make further contributions to our social and economic future,” says Mukesh Rathi, Senior Vice President & CIO, Dr. Reddy’s Laboratories.

  • “To survive the disruption caused by the pandemic and to succeed in the long term, organizations need to become digital natives, so they can be more agile, explore new business models and build new capabilities that boost resilience. A cloud-first strategy plays a key role in enabling businesses to do this,” said Piyush N. Singh, Lead - India market unit & lead - Growth and Strategic Client Relationships, Asia Pacific and Latin America, Accenture. “Harnessing the potential of cloud requires the right data infrastructure and this expansion by Google Cloud will undoubtedly help Indian enterprises in their digital transformation journeys.”


A global network of regions

Delhi NCR joins 25 existing Google Cloud regions connected via our high-performance network, helping customers better serve their users and customers around the globe. As the second region in India, it gives customers improved business continuity planning, with the distributed, secure infrastructure needed to meet IT and business requirements for disaster recovery while maintaining data sovereignty. 



With this new region, Google Cloud customers operating in India also benefit from low latency and high performance of their cloud-based workloads and data. Designed for high availability, the region opens with three availability zones to protect against service disruptions, and offers a portfolio of key products, including Compute Engine, App Engine, Google Kubernetes Engine, Cloud Bigtable, Cloud Spanner, and BigQuery. 


Supporting India’s recovery with training and education

Google and Google Cloud will also continue to support our customers with people and education programs. We’re investing in local talent and the local developer community to help enterprises digitally transform and support economic recovery. 


Through the India Digitization Fund, we expanded our efforts to support India’s recovery from COVID-19—in particular, through programs to support education and small businesses. In addition to expanding internet access and investing in start-ups that accelerate India’s digital transformation, we’ve grown our Grow with Google efforts. Businesses can access digital tools to maintain business continuity, find resources like quick help videos, and learn digital skills—in both English and in Hindi.


Helping customers build their transformation clouds

Google Cloud is here to support businesses, helping them get smarter with data, deploy faster, connect more easily with people and customers throughout the globe, and protect everything that matters to their businesses. The cloud region in Delhi NCR offers new technology and tools that can be a catalyst for this change. To learn more, visit the Google Cloud locations page, and be sure to watch the region launch event here.

 


Posted by Bikram Singh Bedi, Managing Director, Google Cloud India


Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 92 (92.0.4515.101) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Android 12 Beta 3 for TV is now available

Posted by Wolfram Klein, Product Manager, Android TV OS

Alongside today’s Android 12 Beta 3 release for mobile, we’re also bringing the third Beta of Android 12 to Android TVs. We’re excited to bring new media features, UI improvements, and privacy controls to the experience with Beta 3 while we continue our work of preparing the full release.

Media

At the heart of the TV experience is beautiful and seamless media playback. In the US, users are spending well over 4 hours a day watching media on TV, and are always asking for the highest resolution playback possible. With Android 12, we are releasing three new features to better support ever-improving picture quality.

  • Refresh Rate Switching Settings: For a smoother viewing experience, Android 12 now supports seamless and non-seamless refresh rate switching. Apps can now integrate these settings for playback of content at optimal frame rates. The Match Content Frame Rate user setting has been added to allow users to control this feature, and apps can call Display.getMode to know if a user’s device supports seamless rate switching (see the Kotlin sketch after this list).
  • Better display mode reporting: We are improving how TV devices report display modes and making hotplugging behavior more consistent. App developers no longer need to use workarounds for accurately detecting display modes or for handling HDMI hotplug events.
  • Tunnel Mode Updates: Updates to Android’s tunnel mode are making it even easier for app developers to support consistent and efficient playback across devices by reducing media processing overhead in the Android Framework.
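
To make the refresh-rate behavior concrete, here is a minimal Kotlin sketch of how an app might request a content-matched frame rate on Android 12. It assumes API level 31; the function name and the tolerance threshold are illustrative, not part of any platform API.

```kotlin
// Hypothetical sketch (API 31+): query the active display mode's seamless
// alternatives and ask the platform for a frame rate that matches the content.
import android.view.Display
import android.view.Surface
import android.view.SurfaceView
import kotlin.math.abs

fun matchContentFrameRate(surfaceView: SurfaceView, display: Display, contentFps: Float) {
    // Refresh rates the display can switch to seamlessly from the current mode.
    val seamlessRates = display.mode.alternativeRefreshRates

    // Prefer a seamless switch; otherwise allow a non-seamless one, subject to
    // the user's Match Content Frame Rate setting.
    val strategy = if (seamlessRates.any { abs(it - contentFps) < 0.01f })
        Surface.CHANGE_FRAME_RATE_ONLY_IF_SEAMLESS
    else
        Surface.CHANGE_FRAME_RATE_ALWAYS

    surfaceView.holder.surface.setFrameRate(
        contentFps,
        Surface.FRAME_RATE_COMPATIBILITY_FIXED_SOURCE,
        strategy
    )
}
```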

User Interface

A beautiful media experience needs an equally stunning user interface to match. Android TV brings two new additions to the UI that help developers provide users with a richer visual experience on high performance devices.

  • Background blurs: Background blurring using RenderEffect (for in-app blurs) and WindowManager (for cross-window blurs) can now be used to easily enhance the visual separation of different UI layers (see the sketch at the end of this section).

Example background blur used to separate UI layers.

  • 4K UI support: For added visual fidelity, Android TV OS now officially supports UI rendering at 4K resolution on compatible devices. 4K UI resolution can be tested in the upcoming Android 12 emulator for TV, allowing app developers to prepare their apps for devices with the higher resolution.
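
As a rough illustration of the two blur paths, here is a hedged Kotlin sketch: RenderEffect for an in-app blur and the window's blur-behind attributes for a cross-window blur. It assumes API level 31, and the layout and view IDs are hypothetical placeholders.

```kotlin
// Hypothetical sketch (API 31+): in-app blur via RenderEffect and
// cross-window blur via the window's blur-behind attributes.
import android.app.Activity
import android.graphics.RenderEffect
import android.graphics.Shader
import android.os.Bundle
import android.view.View
import android.view.WindowManager

class BlurDemoActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_blur_demo)       // hypothetical layout

        // In-app blur: blur only this view's content to separate UI layers.
        val backdrop: View = findViewById(R.id.backdrop)  // hypothetical view ID
        backdrop.setRenderEffect(
            RenderEffect.createBlurEffect(20f, 20f, Shader.TileMode.CLAMP)
        )

        // Cross-window blur: blur whatever is drawn behind this window.
        window.addFlags(WindowManager.LayoutParams.FLAG_BLUR_BEHIND)
        window.attributes = window.attributes.apply { blurBehindRadius = 30 }
    }
}
```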

Privacy and Security

With Android 12, we’re continuing to focus on giving users more transparency and control while keeping their devices and data secure. Beta 3 for TV includes many of the new privacy features from the Android framework.

  • Microphone and camera indicators: An indicator now appears on the TV screen whenever an app accesses the microphone or camera. Users can visit their privacy settings on TV to see which apps recently accessed the microphone or camera.

Microphone and camera indicators showing during a video call. Video credit: Ekaterina Bolovtsova.

  • Microphone and camera toggles: Two new global privacy settings are now available, allowing the user to easily toggle access to the microphone or camera. When those toggles are disabled, apps will be unable to access microphone audio and camera video.

Microphone access toggle in a user’s global privacy settings.

  • Device Attestation: To assure that your application is running on certified and authentic hardware, the Android KeyStore API has been extended to support attestation of basic device properties.
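
For the attestation point above, a minimal Kotlin sketch might look like the following. The key alias and function name are illustrative; it assumes API level 31 and a challenge supplied by your server.

```kotlin
// Hypothetical sketch (API 31+): generate a key whose attestation certificate
// includes basic device properties, which a server can later verify.
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPair
import java.security.KeyPairGenerator

fun generateAttestedKey(serverChallenge: ByteArray): KeyPair {
    val spec = KeyGenParameterSpec.Builder("attested_key", KeyProperties.PURPOSE_SIGN)
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setAttestationChallenge(serverChallenge)        // bind the attestation to a server nonce
        .setDevicePropertiesAttestationIncluded(true)    // include brand, model, etc. in the certificate
        .build()

    val generator = KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
    generator.initialize(spec)
    return generator.generateKeyPair()
    // The attestation certificate chain can then be read from the AndroidKeyStore
    // entry and verified off-device.
}
```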

The Android 12 Beta 3 release for TV is available as a system update to ADT-3 devices today. In the coming weeks, you will also be able to use the preview version of the Android 12 emulator to test and build your apps for TV. We hope this helps you test your Android TV app implementations for the next generation of devices. To learn more about getting your Android TV app ready, visit our Android TV OS developers page.

We can’t wait to see what you will build with Android 12 on TV!

From Vision to Language: Semi-supervised Learning in Action…at Scale

Supervised learning, the machine learning task of training predictive models using data points with known outcomes (i.e., labeled data), is generally the preferred approach in industry because of its simplicity. However, supervised learning requires accurately labeled data, the collection of which is often labor intensive. In addition, as model efficiency improves with better architectures, algorithms, and hardware (GPUs / TPUs), training large models to achieve better quality becomes more accessible, which, in turn, requires even more labeled data for continued progress.

To mitigate such data acquisition challenges, semi-supervised learning, a machine learning paradigm that combines a small amount of labeled data with a large amount of unlabeled data, has recently seen success with methods such as UDA, SimCLR, and many others. In our previous work, we demonstrated for the first time that a semi-supervised learning approach, Noisy Student, can achieve state-of-the-art performance on ImageNet, a large-scale academic benchmark for image classification, by utilizing many more unlabeled examples.

Inspired by these results, today we are excited to present semi-supervised distillation (SSD), a simplified version of Noisy Student, and demonstrate its successful application to the language domain. We apply SSD to language understanding within the context of Google Search, resulting in high performance gains. This is the first successful instance of semi-supervised learning applied at such a large scale and demonstrates the potential impact of such approaches for production-scale systems.

Noisy Student Training
Prior to our development of Noisy Student, there was a large body of research into semi-supervised learning. In spite of this extensive research, however, such systems typically worked well only in the low-data regime, e.g., CIFAR, SVHN, and 10% ImageNet. When labeled data were abundant, such models were unable to compete with fully supervised learning systems, which prevented semi-supervised approaches from being applied to important applications in production, such as search engines and self-driving cars. This shortcoming motivated our development of Noisy Student Training, a semi-supervised learning approach that worked well in the high-data regime, and at the time achieved state-of-the-art accuracy on ImageNet using 130M additional unlabeled images.

Noisy Student Training has 4 simple steps:

  1. Train a classifier (the teacher) on labeled data.
  2. The teacher then infers pseudo-labels on a much larger unlabeled dataset.
  3. A larger classifier (the student) is then trained on the combined labeled and pseudo-labeled data, with noise added during training (the noisy student).
  4. (Optional) Going back to step 2, the student may be used as a new teacher.
An illustration of Noisy Student Training through four simple steps. We use two types of noise: model noise (Dropout, Stochastic Depth) and input noise (data augmentation, such as RandAugment).
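
To make the loop concrete, here is a self-contained Kotlin sketch of the four steps using a deliberately tiny stand-in model (a nearest-centroid classifier on 2-D points) and feature jitter as a stand-in for data augmentation. Nothing here mirrors the paper's EfficientNet setup; it only illustrates the teacher-student loop.

```kotlin
// Toy sketch of the Noisy Student loop. Model, data, and "noise" are stand-ins.
import kotlin.random.Random

typealias Point = DoubleArray                      // a 2-D feature vector
data class Labeled(val x: Point, val y: Int)       // one labeled example

// Toy "model": one centroid per class; predict by nearest centroid.
class CentroidClassifier(private val centroids: Map<Int, Point>) {
    fun predict(x: Point): Int = centroids.minByOrNull { (_, c) ->
        (x[0] - c[0]) * (x[0] - c[0]) + (x[1] - c[1]) * (x[1] - c[1])
    }!!.key
}

fun train(data: List<Labeled>): CentroidClassifier =
    CentroidClassifier(data.groupBy { it.y }.mapValues { (_, pts) ->
        doubleArrayOf(pts.map { it.x[0] }.average(), pts.map { it.x[1] }.average())
    })

// Input noise: jitter features slightly (stand-in for RandAugment-style augmentation).
fun addNoise(x: Point, rng: Random): Point =
    doubleArrayOf(x[0] + rng.nextDouble(-0.1, 0.1), x[1] + rng.nextDouble(-0.1, 0.1))

fun noisyStudent(labeled: List<Labeled>, unlabeled: List<Point>, rounds: Int): CentroidClassifier {
    val rng = Random(0)
    var teacher = train(labeled)                                        // step 1: train the teacher
    repeat(rounds) {
        val pseudo = unlabeled.map { Labeled(it, teacher.predict(it)) } // step 2: pseudo-label unlabeled data
        val noisy = (labeled + pseudo).map { Labeled(addNoise(it.x, rng), it.y) }
        teacher = train(noisy)                                          // step 3: train the noisy student
    }                                                                   // step 4: reuse the student as the teacher
    return teacher
}
```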

One can view Noisy Student as a form of self-training, because the model generates pseudo-labels with which it retrains itself to improve performance. A surprising property of Noisy Student Training is that the trained models work extremely well on robustness test sets for which they were not optimized, including ImageNet-A, ImageNet-C, and ImageNet-P. We hypothesize that the noise added during training not only helps with the learning, but also makes the model more robust.

Examples of images that are classified incorrectly by the baseline model, but correctly by Noisy Student. Left: An unmodified image from ImageNet-A. Middle and Right: Images with noise added, selected from ImageNet-C. For more examples including ImageNet-P, please see the paper.

Connections to Knowledge Distillation
Noisy Student is similar to knowledge distillation, which is a process of transferring knowledge from a large model (i.e., the teacher) to a smaller model (the student). The goal of distillation is to improve speed in order to build a model that is fast to run in production without sacrificing much in quality compared to the teacher. The simplest setup for distillation involves a single teacher and uses the same data, but in practice, one can use multiple teachers or a separate dataset for the student.

Simple illustrations of Noisy Student and knowledge distillation.

Unlike Noisy Student, knowledge distillation does not add noise during training (e.g., data augmentation or model regularization) and typically involves a smaller student model. In contrast, one can think of Noisy Student as the process of “knowledge expansion”.

Semi-Supervised Distillation
Another strategy for training production models is to apply Noisy Student training twice: first to get a larger teacher model T’ and then to derive a smaller student S. This approach produces a model that is better than either training with supervised learning or with Noisy Student training alone. Specifically, when applied to the vision domain for a family of EfficientNet models, ranging from EfficientNet-B0 with 5.3M parameters to EfficientNet-B7 with 66M parameters, this strategy achieves much better performance for each given model size (see Table 9 of the Noisy Student paper for more details).

Noisy Student training needs data augmentation, e.g., RandAugment (for vision) or SpecAugment (for speech), to work well. But in certain applications, e.g., natural language processing, such types of input noise are not readily available. For those applications, Noisy Student Training can be simplified to have no noise. In that case, the above two-stage process becomes a simpler method, which we call Semi-Supervised Distillation (SSD). First, the teacher model infers pseudo-labels on the unlabeled dataset from which we then train a new teacher model (T’) that is of equal-or-larger size than the original teacher model. This step, which is essentially self-training, is then followed by knowledge distillation to produce a smaller student model for production.

An illustration of Semi-Supervised Distillation (SSD), a 2-stage process that self-trains an equal-or-larger teacher (T’) before distilling to a student (S).
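
Continuing the toy sketch above, the two SSD stages might look like this in Kotlin: self-train an equal-or-larger teacher T’ without added noise, then distill it into the student S that would ship. Model sizes are meaningless for the toy classifier; the point is only the order of the stages.

```kotlin
// Hypothetical sketch reusing the toy CentroidClassifier, train(), Labeled, and
// Point from the Noisy Student example above: stage 1 is noise-free self-training,
// stage 2 is distillation into the production model.
fun semiSupervisedDistillation(labeled: List<Labeled>, unlabeled: List<Point>): CentroidClassifier {
    // Stage 1 (self-training): the teacher pseudo-labels the unlabeled data,
    // then an equal-or-larger teacher T' is trained on everything, with no noise added.
    val teacher = train(labeled)
    val pseudo = unlabeled.map { Labeled(it, teacher.predict(it)) }
    val tPrime = train(labeled + pseudo)

    // Stage 2 (distillation): the production student S learns from the labels
    // T' assigns, typically over a large unlabeled corpus.
    val distillationSet = (labeled.map { it.x } + unlabeled).map { Labeled(it, tPrime.predict(it)) }
    return train(distillationSet)   // the smaller student S that would be deployed
}
```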

Improving Search
Having succeeded in the vision domain, an application in the language understanding domain, like Google Search, is a logical next step with broader user impact. In this case, we focus on an important ranking component in Search, which builds on BERT to better understand languages. This task turns out to be well-suited for SSD. Indeed, applying SSD to the ranking component to better understand the relevance of candidate search results to queries achieved one of the highest performance gains among top launches at Search in 2020. Below is an example of a query where the improved model demonstrates better language understanding.

With the implementation of SSD, Search is able to find documents that are more relevant to user queries.

Future Research & Challenges
We have presented a successful instance of semi-supervised distillation (SSD) in the production scale setting of Search. We believe SSD will continue changing the landscape of machine learning usage in the industry from predominantly supervised learning to semi-supervised learning. While our results are promising, much research is still needed on how to efficiently utilize unlabeled examples in the real world, where data are often noisy, and how to apply these approaches across domains.

Acknowledgements
Zhenshuai Ding, Yanping Huang, Elizabeth Tucker, Hai Qian, and Steve He contributed immensely to this successful launch. The project would not have succeeded without contributions from members of both the Brain and Search teams: Shuyuan Zhang, Rohan Anil, Zhifeng Chen, Rigel Swavely, Chris Waterson, Avinash Atreya. Thanks to Qizhe Xie and Zihang Dai for feedback on the work. Also, thanks to Quoc Le, Yonghui Wu, Sundeep Tirumalareddy, Alexander Grushetsky, Pandu Nayak for their leadership support.

Source: Google AI Blog


How students built a web app with the potential to help frontline workers

Posted by Erica Hanson, Global Program Manager, Google Developer Student Clubs

Image of Olly and Daniel from Google Developer Student Clubs at Wash U.

When Olly Cohen first arrived on campus at Washington University in St. Louis (Wash U), he knew the school was home to many talented and eager developers, just like him. Computer science is one of the most popular majors at Wash U, and graduates often find jobs in the tech industry. With that in mind, Olly was eager to build a community of peers who wanted to take theories learned in the classroom and put them to the test with tangible, real-life projects. So he decided to start his own Google Developer Student Club, a university-based community group for students interested in learning about Google developer technology.

Olly applied to become Google Developer Student Club Lead so he could start his own club with a faculty advisor, host workshops on developer products and platforms, and build projects that would give back to their community.

He didn’t know it at the time, but starting the club would eventually lead him to the most impactful development project of his early career — building a web application with the potential to help front-line healthcare workers in St. Louis, Missouri, during the pandemic.

Growing a community with a mission

The Google Developer Student Club grew quickly. Within the first few months, Olly and the core team signed up 150 members, hosted events with 40 to 60 attendees on average and began working on five different projects. One of the club’s first successful projects, led by Tom Janoski, was building a tool for the visually impaired. The app provides audio translations of visual media like newspapers and sports games.

This success inspired them to focus their projects on social good missions, and in particular helping small businesses in St. Louis. With a clear goal established, the club began to take off, growing to over 250 members managed by 9 core team members. They were soon building 10 different community-focused projects, and attracting the attention of many local leaders, including university officials, professors and organizers.

Building a web app for front-line healthcare workers

As the St. Louis community began to respond to the coronavirus pandemic in early 2020, some of the leaders at Wash U wondered if there was a way to digitally track PPE needs from front-line health care staff at Wash U’s medical center. The Dean of McKelvey School of Engineering reached out to Olly Cohen and his friend Daniel Sosebee to see if the Google Developer Student Club could lend a hand.

The request was sweeping: Build a web application that could potentially work for the clinical staff of Wash U’s academic hospital, Barnes-Jewish Hospital.

So the students got right to work, consulting with Google employees, Wash U computer science professors, an industry software engineer, and an M.D./Ph.D. candidate at the university’s School of Medicine.

With the team assembled, the student developers first chose a platform on which to base their solution. Next, they built a simple prototype with a Google Form that linked to Google Sheets, so they could launch a pilot. Lastly, in conjunction with the Google Form, they developed a serverless web application with a form and data portal that could let all staff members easily request new PPE supplies.

In other words, their solution showed the potential to help medical personnel digitally track PPE shortages in real time, making it easier and faster to identify and gather the resources doctors need right away. A web app built by students and poised to make a true difference: that is what the Google Developer Student Club experience is all about.

Ready to make a difference?

Are you a student who also wants to use technology to make a difference in your community? Click here to learn more about joining or starting a Google Developer Student Club near you.