Small gestures, big impact: Google ATAP’s latest work

Google is dedicated to making tech accessible for everyone, and our hardware innovation division, Google ATAP, is working on this as well. As part of a Jacquard (a connected apparel platform) research project, we at ATAP worked closely with members and advocates of the disability community to understand how advanced wearable technologies, like smart textiles, gesture interfaces and on-device AI, can help more people.


Earlier this year, we worked with members of Champions Place, a shared living residence for young adults with disabilities, to better understand why existing technologies sometimes fall short of meeting the full needs of people with mobility and dexterity disabilities.

This research inspired us to use Jacquard technology to create a soft, interactive patch or sleeve that allows people to access digital, health and security services with simple gestures. This woven technology can be worn or positioned on a variety of surfaces and locations, adjusting to the needs of each individual. 


We teamed up with Garrison Redd, a Para powerlifter and advocate in the disability community, to test this new idea. 


Garrison’s feedback has been invaluable, and he’s shared some of his favorite functions. “The selfie option is helpful as far as creativity,” Garrison says. “If I’m in the gym and have the armband on I can capture images from a proper angle for my coach and the training staff, without having to wheel into position, which isn’t ideal. So that does increase my independence, which is important for individuals who have disabilities.” He also pointed out areas where we could improve. “It’s important that the surface can be sensitive to one or two fingers for people who may have more needs than I have.”


We hugely benefited from Garrison’s background and expertise, and implemented his feedback into our work. For example, we’re now developing machine learning models for gesture recognition that adapt over time to each person's unique dexterity. This will allow people with different levels of motor disabilities to use simple gestures to do things like call someone, or order a rideshare service. These might seem like incremental steps forward, but as Garrison says, “It’s the small things that make a difference between being dependent and independent.”
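As a loose illustration of what adapting gesture recognition to an individual's dexterity could mean in code (the actual Jacquard models are not public; the features and gestures below are invented), here is a minimal sketch of a nearest-centroid recognizer that drifts toward a user's own confirmed examples:

```python
import numpy as np

class AdaptiveGestureClassifier:
    """Nearest-centroid gesture recognizer that adapts to one user over time."""

    def __init__(self, templates, lr=0.1):
        # templates: gesture name -> population-average feature vector,
        # e.g. features extracted from the woven touch sensor (illustrative).
        self.centroids = {g: np.asarray(t, dtype=float) for g, t in templates.items()}
        self.lr = lr  # how quickly templates drift toward this user's examples

    def predict(self, features):
        # Classify a gesture by its closest template.
        features = np.asarray(features, dtype=float)
        return min(self.centroids,
                   key=lambda g: np.linalg.norm(features - self.centroids[g]))

    def adapt(self, features, confirmed_gesture):
        # Nudge the template toward an example the user confirmed, so
        # recognition gradually tracks their individual dexterity.
        features = np.asarray(features, dtype=float)
        old = self.centroids[confirmed_gesture]
        self.centroids[confirmed_gesture] = (1 - self.lr) * old + self.lr * features

# Example: the "double tap" template adapts toward a user's lighter touches.
clf = AdaptiveGestureClassifier({"double_tap": [1.0, 0.8], "swipe": [0.2, 0.1]})
clf.adapt([0.7, 0.5], "double_tap")
print(clf.predict([0.7, 0.5]))  # -> "double_tap"
```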

The tale of the Dutch bookstore, the pivot and the Golden Pin

Bookstore Dominicanen can be found in a former Dominican church in the city of Maastricht, a thriving cultural hub and one of the oldest cities in The Netherlands. Before COVID, the bookstore welcomed almost one million visitors a year. They mostly relied on customers visiting in person to shop for a good read or to enjoy a coffee while admiring the store’s vault paintings and the unique 14th century fresco depicting scenes from Thomas Aquinas’ life. And then the pandemic hit.


The COVID-19 pandemic has had a major impact on businesses worldwide, but it also sparked creativity and accelerated many businesses' use of digital tools. In The Netherlands, for example, 81% of Dutch SMEs made more use of digital tools during the lockdown to stay in touch with their customers and inform them about changes in their services.


Dominicanen was one such business to respond to the continuously changing circumstances, something Google recognised with a Golden Pin Award.


What are the Golden Pin Awards?


In summer 2021, the Google Netherlands team awarded Golden Pin Awards to twelve inspiring entrepreneurs across the country who managed to continue their services during the pandemic with creativity and the smart use of digital resources, whilst receiving high user reviews on their Business Profiles on Google Maps and Search. The list of winners was diverse: from a game store to a knife sharpener, and from a fair fashion clothing shop to a brewery cafe.

The winners all demonstrated creativity in continuing to offer their services both online and offline, ranging from pop-up drive-through restaurants to online tastings and fitting sessions for women’s clothing on YouTube.

Bookshop owner Ton Harmes stands in front of bookcases holding his Golden Pin trophy from Google

Bookstore owner Ton Harmes with his Golden Pin

How did Bookstore Dominicanen pivot?


For the owner, Ton Harmes, the key was perseverance. With the store having to close during the busiest weeks of the year - right in the middle of the holiday season - they had to adapt to survive. Their staff and volunteers immediately started delivering books on foot, by bicycle and by car. They also set up a takeout counter (offering click and collect online) and, when visitor numbers were limited, they launched a pop-up store: Do(mini)canen.


Online, they invested heavily in their visibility on Google and on their social channels to keep in touch with their customers. By prioritising keeping their Business Profile on Google Maps and Search up to date, they were always able to indicate changes in shopping times and services. Every change was immediately visible to their customers. They also started streaming book presentations and interviews with writers live on YouTube and communicated a lot more through all their social media to keep customers informed.


Ton is convinced that the internet will continue to play a role in their business operations now that they have reopened. Speaking to The Keyword he said, ‘We realized that with a million visitors a year, it seems like you don't really need these ‘modern developments’. But when we closed it was suddenly dead quiet in the store. Then we realized how vulnerable we are if we rely solely on in-person customers and that we have to develop our digital channels faster. We now see online, even after corona, playing an increasingly important role in our contacts with customers - for both engagement and actually finding us. Our YouTube channel has received a boost and we keep reaching out to customers through our social channels. The crisis has really caused a change in our mindset.’


The Golden Pin Award has been given a special place in the store and serves as a reminder of the perseverance and pivoting that had to be done. Their ability to continue to conduct business through the worst of it is down to what Ton refers to as ‘Haw Pin’: Hold on.

50 years of film with NFTS and Google Arts & Culture

What do Wallace and Gromit, Blade Runner and We Need to Talk About Kevin all have in common? Answer: they were each made possible by alumni from the prestigious National Film and Television School based in Beaconsfield, UK. 


The National Film and Television School (NFTS) is an internationally respected institution for education and creativity, launching the careers of many directors, producers, cinematographers, animators and more. Many of these alumni have gone on to become household names and earn multiple BAFTAs and Oscars, making NFTS the most awarded film school globally. To celebrate the school's 50th anniversary, for the first time in its history, online audiences will be able to explore a new digital archive of over 200 graduate films from alumni.


Alongside the films, audiences can explore a series of stories, curated playlists and articles. Newer films appear alongside creations from names such as Nick Park (Wallace and Gromit), Lynne Ramsay (We Need to Talk About Kevin) and Fantastic Beasts director David Yates.

A gif featuring a clip of Wallace and Gromit, a black and white 16mm camera, two bears in a forest and a woman holding a flying machine

Introducing the Digital Archive

In collaboration with Google Arts & Culture, NFTS has curated an online Digital Archive that allows users to watch unseen films, hear from industry experts and enjoy carefully curated film playlists from NFTS students.


The NFTS Digital Archive, launching in September, is a curated collection of over 200 graduate films taken from the NFTS vaults, with behind-the-scenes stills, trailers, original screenplays and recent interviews with NFTS filmmakers including Beeban Kidron (Oranges Are Not the Only Fruit), Adrian Rhodes (Tomorrow Never Dies) and Anthony Chen (Ilo Ilo), plus many more.

“There is so much to learn from looking behind the scenes, hearing the filmmaker’s voice and analyzing each film scene by scene.” - Dr Jon Wardle, Director, NFTS

The NFTS at 50

The NFTS Digital Archive is launching as part of the NFTS’ 50th Anniversary celebrations. To help mark the occasion, NFTS at 50 is the result of over two years of work to digitize hundreds of tapes from the archive, interview alumni, and bring it all together for audiences around the world to explore on Google Arts & Culture.


The NFTS at 50 season, taking place at the British Film Institute in London throughout September, is a rare chance for audiences to see work from early on in the careers of some of the most distinctive and successful voices from the NFTS, including Roger Deakins (Blade Runner) and Clio Barnard (The Selfish Giant).


How to enjoy it online

You can access the entire archive now at g.co/NFTS and discover stories and over 200 films celebrating the NFTS and its incredible half century.


If you prefer a more guided experience, you can explore films by theme, such as Films of Friendship, which includes the animated Sleeping with Fishes, or go from Love Island to Love Gym with the fitness dating show. If you are short on time, why not check out four shorts, all under 10 minutes: Damned, The Alan Dimension, After and A Love Story?


So sit back and enjoy your very own Film Festival and explore more on the Google Arts & Culture app for iOS and Android.

Stills from the archive: a man in a driving helmet from Group B, starring Richard Madden; two polar bears around a fire (animation); and cartoon characters in a yellow car from the film Anna Spud

Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 94 (94.0.4606.31) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Find and share GIFs in Google Chat

What’s changing

We’re introducing a new integration with Tenor, which will allow users to quickly search for and send GIFs in Google Chat on web.

Additionally, we’re introducing a new setting for Admins to control the use of this GIF integration within their organization.

See below for more information.


Who’s impacted

Admins and end users


Why you’d use it

The new Tenor integration makes it faster and easier for users to find and share GIFs when interacting with colleagues.  


Easily find and share GIFs
We’ve made improvements to the way users find and share GIFs in Chat. You’ll notice a new “GIF” icon, which will bring up an expanded window for browsing GIFs. 


Here, users can find GIFs through search or by filtering on popular categories such as “Trending”.


Admins can enable or disable this GIF integration in the Admin console by going to Apps > Google Workspace > Settings for Google Chat > GIFs. This integration is enabled by default for all customers. Visit the Help Center to learn more about turning GIFs on or off in Google Chat.


Getting started

  • Admins: Important note: Admins will see this setting beginning September 1, 2021, allowing one week to configure the setting before the end user experience begins rolling out. Visit the Help Center to learn more about turning GIFs on or off for your organization.
  • End users: When enabled by your admin, you can find and insert GIFs by selecting the “GIF” icon in the Google Chat compose bar.

Rollout pace

Admin setting

End user experience 

Availability

  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers

Beta Channel Update for Desktop

The Beta channel has been updated to 94.0.4606.31 for Windows and Linux, and 94.0.4606.30 for Mac.


A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.




Prudhvikumar Bommana
Google Chrome

Detecting Abnormal Chest X-rays using Deep Learning

The adoption of machine learning (ML) for medical imaging applications presents an exciting opportunity to improve the availability, latency, accuracy, and consistency of chest X-ray (CXR) image interpretation. Indeed, a plethora of algorithms have already been developed to detect specific conditions, such as lung cancer, tuberculosis and pneumothorax. By virtue of being trained to detect a specific disease, however, the utility of these algorithms may be limited in a general clinical setting, where a wide variety of abnormalities could surface. For example, a pneumothorax detector is not expected to highlight nodules suggestive of cancer, and a tuberculosis detector may not identify findings specific to pneumonia. Since an initial triaging step is to determine whether a CXR contains concerning abnormalities, a general-purpose algorithm that identifies X-rays containing any sort of abnormality could significantly facilitate the workflow. However, developing a classifier to do this is challenging due to the wide variety of abnormal findings that present on CXRs.

In “Deep Learning for Distinguishing Normal versus Abnormal Chest Radiographs and Generalization to Two Unseen Diseases Tuberculosis and COVID-19”, published in Scientific Reports, we present a model that can distinguish between normal and abnormal CXRs across multiple de-identified datasets and settings. We find that the model performs well on general abnormalities, as well as unseen examples of tuberculosis and COVID-19. We are also releasing our set of radiologists’ labels1 for the test set used in this study for the publicly available ChestX-ray14 dataset.

A Deep Learning System for Detecting Abnormal Chest X-rays
The deep learning system we used is based on the EfficientNet-B7 architecture, pre-trained on ImageNet. We trained the model using over 200,000 de-identified CXRs from the Apollo Hospitals in India. Each CXR was assigned a label of either “normal” or “abnormal” using a regular expression–based natural language processing approach on the associated radiology reports.
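As a rough sketch of that setup (not the study's released code; the label phrases and training details below are placeholders), the two pieces could look like this in Keras:

```python
import re
import tensorflow as tf

def label_report(report_text):
    """Toy regex labeler: 0 = "normal", 1 = "abnormal". The phrases below are
    invented placeholders; the study's actual rules are in the paper."""
    normal_patterns = [r"no acute cardiopulmonary", r"normal study"]
    is_normal = any(re.search(p, report_text, re.IGNORECASE)
                    for p in normal_patterns)
    return 0 if is_normal else 1

def build_model(image_size=600):
    """Binary normal-vs-abnormal classifier on an ImageNet-pretrained
    EfficientNet-B7 backbone, as the post describes."""
    backbone = tf.keras.applications.EfficientNetB7(
        include_top=False, weights="imagenet", pooling="avg",
        input_shape=(image_size, image_size, 3))
    output = tf.keras.layers.Dense(1, activation="sigmoid")(backbone.output)
    model = tf.keras.Model(backbone.input, output)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auroc")])
    return model
```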

To evaluate how well the system generalizes to new patient populations, we compared its performance on two datasets consisting of a wide spectrum of abnormalities: the test split from the Apollo Hospitals dataset (DS-1), and the publicly available ChestX-ray14 (CXR-14). The labels for these two test sets were annotated for the purposes of this project by a group of US board-certified radiologists. The system achieved areas under the receiver operating characteristic curve (AUROC) of 0.87 on DS-1 and 0.94 on CXR-14 (higher is better).
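For reference, AUROC is computed from the model's scores and the radiologists' reference labels; a minimal sketch with scikit-learn and stand-in data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Stand-in data: reference labels (0 = normal, 1 = abnormal) and the model's
# sigmoid scores for the same images.
y_true = np.array([0, 0, 1, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.2, 0.8, 0.9])
print(f"AUROC: {roc_auc_score(y_true, y_score):.2f}")
```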

Though the evaluations on DS-1 and CXR-14 contained a wide range of abnormalities, a possible use-case would be to utilize such an abnormality detector in novel or unforeseen settings with diseases that it had not encountered before. To evaluate the generalizability of the system to new patient populations and in the presence of diseases not seen in the training set, we used four de-identified datasets from three countries, including two publicly available tuberculosis datasets and two COVID-19 datasets from Northwestern Medicine. The system achieved AUCs of 0.95-0.97 in detecting tuberculosis, and 0.65-0.68 in detecting COVID-19. Because CXRs that are negative for these diseases could still contain other concerning abnormalities, we further evaluated the system for its ability to detect abnormalities more broadly (instead of disease positive vs. negative), finding AUCs of 0.91-0.93 for the tuberculosis datasets and 0.86 for the COVID-19 datasets.

We ran both evaluations (abnormality detection and disease detection) because the two are distinct: a given disease can present with a certain abnormality or not, and a certain abnormality can arise from multiple diseases. Our study evaluates for both.

The large drop in performance for COVID-19 is because many cases flagged by the system as “positive” for abnormalities were negative for COVID-19, but nevertheless contained abnormal CXR findings that needed attention. This further highlights the usefulness of abnormality detectors even if disease-specific models are available.

In addition, it’s important to note that there is a difference between generalization to unseen diseases (i.e., tuberculosis and COVID-19) versus generalization to unseen CXR findings (e.g., pleural effusion, consolidation/infiltrate). In this study, we demonstrated the generalizability of the system to unseen diseases but not necessarily unseen CXR findings.

Sample chest X-rays of true and false positives, and true and false negatives for (A) general abnormalities, (B) tuberculosis, and (C) COVID-19. On each CXR, we outline in red the areas on which the model focused to identify abnormalities (i.e., the class activation map), and outline the regions of interest indicated by a radiologist in yellow.

Potential Benefits in the Clinic
To understand the potential utility of the deep learning model in improving clinical workflow, we simulated its use for case prioritization, where abnormal cases are “expedited” ahead of normal cases. In these simulations, the system reduced the turnaround time for abnormal cases by up to 28%. This reprioritization setup could be used to divert complex abnormal cases to cardiothoracic specialist radiologists, enable rapid triage of cases that may need urgent decisions, and provide the opportunity to batch negative CXRs for streamlined review.
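A toy version of that simulation, using queue position as a stand-in for turnaround time (the paper's simulation models reading rates and arrival times; this only illustrates the reordering idea):

```python
import random

random.seed(0)

# A queue of 1,000 cases in arrival order; ~40% abnormal (arbitrary rate).
cases = [("abnormal" if random.random() < 0.4 else "normal", i)
         for i in range(1000)]

def mean_abnormal_position(order):
    # Queue position is our stand-in for turnaround time.
    positions = [pos for pos, (label, _) in enumerate(order)
                 if label == "abnormal"]
    return sum(positions) / len(positions)

fifo = cases[:]  # first-come, first-served review
# Stable sort: abnormal cases first, arrival order preserved within groups.
prioritized = sorted(cases, key=lambda c: c[0] != "abnormal")

print("FIFO mean position:", round(mean_abnormal_position(fifo), 1))
print("Prioritized mean position:", round(mean_abnormal_position(prioritized), 1))
```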

Impact of a simulated deep learning model–based prioritization in comparison with random review order for (A) general abnormalities, (B) tuberculosis, and (C) COVID-19. The bars indicate sequences of abnormal CXRs (red) and normal CXRs (pink); a greater density of red toward the left indicates abnormal CXRs are reviewed sooner than normal ones. The histograms indicate the average improvement in turnaround time.

Additionally, we found that the system can be used as a pre-trained model to improve other ML algorithms for chest X-rays, especially when data is limited. For example, we used the normal/abnormal classifier in our recent study to detect pulmonary tuberculosis from chest X-rays. Abnormality and tuberculosis detectors can play a critical role in supporting early diagnosis in regions that lack access to resources like trained radiologists or molecular testing.
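To sketch what using the system as a pre-trained model could look like (assuming the Keras setup from the earlier sketch and a hypothetical checkpoint path, not the authors' exact recipe), one can reuse the backbone's pooled features under a new disease-specific head:

```python
import tensorflow as tf

# The normal/abnormal backbone from the earlier sketch; in practice its
# weights would come from a checkpoint of the pre-trained classifier.
backbone = tf.keras.applications.EfficientNetB7(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(600, 600, 3))
# backbone.load_weights("cxr_normal_abnormal.h5")  # hypothetical checkpoint

backbone.trainable = False  # freeze transferred weights when data is limited
tb_head = tf.keras.layers.Dense(1, activation="sigmoid", name="tb")(backbone.output)
tb_model = tf.keras.Model(backbone.input, tb_head)
tb_model.compile(optimizer="adam", loss="binary_crossentropy",
                 metrics=[tf.keras.metrics.AUC(name="auroc")])
```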

Sharing Improved Reference Standard Labels
Much work remains to be done to realize the potential of ML to aid chest X-ray interpretation around the world. In particular, obtaining high-quality labels on de-identified data can be a significant barrier to developing and evaluating ML algorithms in healthcare. To accelerate these efforts, we are expanding upon our previous label release by releasing the labels used in this study for the publicly available ChestX-ray14 dataset. We look forward to future machine learning projects by the community in this space.

Acknowledgements

Key contributors to this project at Google include Zaid Nabulsi, Andrew Sellergren, Shahar Jamshy, Charles Lau, Eddie Santos, Atilla P. Kiraly, Wenxing Ye, Jie Yang, Rory Pilgrim, Sahar Kazemzadeh, Jin Yu, Greg S. Corrado, Lily Peng, Krish Eswaran, Daniel Tse, Neeral Beladia, Yun Liu, Po-Hsuan Cameron Chen, Shravya Shetty. Significant contributions and input were also made by radiologist collaborators Sreenivasa Raju Kalidindi, Mozziyar Etemadi, Florencia Garcia Vicente, David Melnick. For the CXR-14 dataset, we thank the NIH Clinical Center for making it publicly available. For tuberculosis data collection, thanks go to Sameer Antani, Stefan Jaeger, Sema Candemir, Zhiyun Xue, Alex Karargyris, George R. Thomas, Pu-Xuan Lu, Yi-Xiang Wang, Michael Bonifant, Ellan Kim, Sonia Qasba, and Jonathan Musco. The authors would also like to acknowledge many members of the Google Health Radiology and labeling software teams, in particular Shruthi Prabhakara, Scott McKinney, and Akib Uddin. Sincere appreciation also goes to the radiologists who enabled this work with their image interpretation and annotation efforts throughout the study; Jonny Wong for coordinating the imaging annotation work; Gavin Bee, Mikhail Fomitchev, Shabir Adeel, Jeff Bertram, and Benedict Noero for data releasing; David F. Steiner, Kunal Nagpal, and Michael D. Howell for providing feedback on the manuscript; Craig Mermel, Lauren Winer, Johnny Luu, Adrienne Welch, Annisah Um'rani, and Ashley Zlatinov for feedback on the blogpost.


1Labels include atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, pneumonia, pneumothorax, consolidation, edema, emphysema, fibrosis, pleural thickening, hernia, other abnormality, and normal vs abnormal. 

Source: Google AI Blog


An easier way to move your App Engine apps to Cloud Run

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud


An easier yet still optional migration

In the previous episode of the Serverless Migration Station video series, developers learned how to containerize their App Engine code for Cloud Run using Docker. While Docker has gained popularity over the past decade, not everyone has containers integrated into their daily development workflow; some prefer "containerless" solutions but know that containers can be beneficial. Well, today's video is just for you, showing how you can still get your apps onto Cloud Run even if you don't have much experience with Docker, containers, or Dockerfiles.

App Engine isn't going away, as Google has expressed long-term support for legacy runtimes on the platform, so those who prefer source-based deployments can stay where they are; this is an optional migration. Moving to Cloud Run is for those who want to explicitly move to containerization.

Migrating to Cloud Run with Cloud Buildpacks video

So how can apps be containerized without Docker? The answer is buildpacks, an open-source technology that makes it fast and easy for you to create secure, production-ready container images from source code, without a Dockerfile. Google Cloud Buildpacks adheres to the buildpacks open specification and allows users to create images that run on all GCP container platforms: Cloud Run (fully-managed), Anthos, and Google Kubernetes Engine (GKE). If you want to containerize your apps while staying focused on building your solutions and not how to create or maintain Dockerfiles, Cloud Buildpacks is for you.

In the last video, we showed developers how to containerize a Python 2 Cloud NDB app as well as a Python 3 Cloud Datastore app. We targeted those specific implementations because Python 2 users are more likely to be using App Engine's ndb or Cloud NDB to connect with their app's Datastore while Python 3 developers are most likely using Cloud Datastore. Cloud Buildpacks do not support Python 2, so today we're targeting a slightly different audience: Python 2 developers who have migrated from App Engine ndb to Cloud NDB and who have ported their apps to modern Python 3 but now want to containerize them for Cloud Run.

Developers familiar with App Engine know that a default HTTP server is provided and started automatically; however, if special launch instructions are needed, users can add an entrypoint directive to their app.yaml files, as illustrated below. When those App Engine apps are containerized for Cloud Run, developers must bundle their own server and provide startup instructions, which is the purpose of the ENTRYPOINT directive in the Dockerfile, also shown below.

Starting your web server with App Engine (app.yaml) and Cloud Run with Docker (Dockerfile) or Buildpacks (Procfile)


In this migration, there is no Dockerfile. While Cloud Buildpacks does the heavy-lifting, determining how to package your app into a container, it still needs to be told how to start your service. This is exactly what a Procfile is for, represented by the last file in the image above. As specified, your web server will be launched in the same way as in app.yaml and the Dockerfile above; these config files are deliberately juxtaposed to expose their similarities.
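As a concrete (and hypothetical) illustration, for a Python app served by gunicorn with a main:app module, the three startup configurations might look like this:

```
# app.yaml (App Engine) -- server started automatically; entrypoint is optional
runtime: python39
entrypoint: gunicorn -b :$PORT main:app

# Dockerfile (Cloud Run with Docker) -- startup must be explicit (final line)
ENTRYPOINT ["gunicorn", "-b", ":8080", "main:app"]

# Procfile (Cloud Run with Buildpacks) -- replaces the Dockerfile entirely
web: gunicorn -b :$PORT main:app
```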

Other than this swapping of configuration files and the expected lack of a .dockerignore file, the Python 3 Cloud NDB app containerized for Cloud Run is nearly identical to the Python 3 Cloud NDB App Engine app we started with. Cloud Run's build-and-deploy command (gcloud run deploy) will use a Dockerfile if present but otherwise selects Cloud Buildpacks to build and deploy the container image. The user experience is the same, only without the time and challenges required to maintain and debug a Dockerfile.
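For example, deploying from a source directory with no Dockerfile might look like the following, where the service name and region are placeholders:

```shell
# Run from the app's source directory. With no Dockerfile present, gcloud
# selects Cloud Buildpacks to build the container image before deploying.
gcloud run deploy my-service --source . --region us-central1
```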

Get started now

If you're considering containerizing your App Engine apps without having to know much about containers or Docker, we recommend you try this migration on a sample app like ours before attempting it with your own. A corresponding codelab leading you step-by-step through this exercise is provided in addition to the video, which you can use for guidance.

All migration modules, their videos (when available), codelab tutorials, and source code can be found in the migration repo. While our content initially focuses on Python users, we hope to one day also cover other legacy runtimes, so stay tuned. Containerization may seem daunting, but the goal is for Cloud Buildpacks and migration resources like this to aid you in your quest to modernize your serverless apps!

This Googler’s team is making shopping more inclusive

There’s a lot to love about online shopping: It’s fast, it’s easy and there are a ton of options to choose from. But there’s one obvious challenge — you can’t try anything on. This is something Google product manager Debbie Biswas noticed, as a tech industry veteran and startup founder herself. “Historically, the fashion industry only celebrates people of a certain size and skin color,” she says. “This was something I wanted to change.”

Debbie grew up in India and moved to the U.S. after she graduated college. “I started a company in the women's apparel space, where I learned to solve user pain points around shopping for clothes, sizing and styling.” While working on her startup, Debbie realized how hard shopping was for women, including herself — the models in the images didn’t show her how something would look on her. 

“When I got an opportunity to work at Google Shopping, I realized I could solve so many of these problems at scale using the best AI/ML tech in the industry,” she says. “As a woman of color, and someone who doesn't conform to the ‘traditional beautiful size,’ I feel very motivated to solve apparel shopping problems for people like me.”

A look at Style AI in action.

This was what Debbie and her team wanted to accomplish with Style AI. Style AI is a Shopping feature that helps people see how a product looks on various body types and offers styling advice. Style AI works by using a machine learning algorithm to look at a specific product and visually understand it. “So if someone searches ‘gingham long sleeve shirt,’ Style AI will look at images of long-sleeved gingham shirts, apply our vision recognition technology and understand things like the pattern and the sleeve length, and show users fashions that might interest them.” In order to make sure Style AI was inclusive of all different shapes, sizes and skin tones, Debbie consulted with Google’s Product Fairness, or ProFair, team. ProFair helps teams at Google apply the AI Principles by investigating fairness issues. Together, they find ways to build inclusive services, strengthen equity in data labels and promote fairness and combat bias in AI.

ProFair held sessions where everyone involved in the project could look for “fairness issues,” which helped Debbie’s team adjust how they designed Style AI. And there was much to consider. “First, we need to be careful of what data we train a model on. If you tell a machine that a certain size and skin color is what it needs to look for, it will,” Debbie explains. “So as responsible product owners, we need to make sure we train it the right way. Even after this, a machine can make many mistakes unknowingly — for example, not realizing that a certain style can be very offensive in one culture and be totally cool in another.” 

For instance, before launching in countries like India and Brazil, ProFair held local focus groups in collaboration with Google’s Product Inclusion team. Debbie says this helped her team find diverse images and clothing for these specific demographics. Debbie’s — and the entire team’s — ultimate goal is that shoppers will feel like they’re seeing themselves when they look for clothing. “Looking at stock product images does not help you decide on your purchase,” she says. “We just always think about what people told us while we were building Style AI: ‘I want to see the product on someone like me!’”