Monthly Archives: May 2021

Chrome for iOS Update

Hi, everyone! We've just released Chrome 90 (90.0.4430.78) for iOS: it'll become available on the App Store over the next few hours.


This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Bindu Suvarna

Google Chrome 

Next steps in West Des Moines

Our city updates continue, checking in with our team in West Des Moines. If you missed our first one, you can find out more about what’s going on in Austin here.


There is a lot happening in West Des Moines as we move toward bringing our first customers online later this year. Google Fiber has already started our engineering and planning efforts, and we recently signed a lease for our local West Des Moines offices and retail space. We’ve also started the renovation process in our Valley Junction location (check out what’s coming below) and plan to settle in sometime in early fall.



While we’re waiting to move in, our team is hard at work from home. Our sales team has been talking to apartment and condominium managers across the city to get their buildings ready for Google Fiber. If you are interested in getting your community wired for Google Fiber, please email us.

There are still a few things that have to happen before we can serve customers. Most importantly, the city of West Des Moines recently started construction on their conduit network. Once they’ve completed the first area, we’ll start pulling fiber through it to bring fast, reliable internet to your homes, and testing to make sure everything works the way it should. Once that’s done we’ll be ready to open our doors (and our website) for customers.

So what can you do to get ready? As we shared back in October, local residents can see the city’s build schedule and sign up to connect to the conduit network on the city’s Plant the Speed site. And if you are looking for more information about what’s going on with Google Fiber in West Des Moines, you can also sign up for Google Fiber email updates for the latest news and availability.

Things are happening fast, so stay tuned!

Posted by Rachel Merlo, Government & Community Affairs Manager




An update on our COVID response priorities

Our teams at Google continue to support the tireless work of hospitals, nonprofits, and public health service providers across the country. Right now, we’re focused on three priority areas: ensuring people can access the latest and most authoritative information; amplifying vital safety and vaccination messages; and providing financial backing for affected communities, health authorities and other organizations.

Providing critical and authoritative information

On all our platforms, we’re taking steps to surface the critical information families and communities need to care for their own health and look after others.

Searches about the COVID-19 vaccine display key information on side effects, effectiveness, and registration details, while treatment-related queries surface guidance from ministry resources

When people ask questions about vaccines on Google Search, they see information panels that display the latest updates on vaccine safety, efficacy and side-effects, plus registration information that directs users to the Co-WIN website. You will also find information about prevention, self-care, and treatment under the Prevention and Treatment tab, in easy-to-understand language sourced from authorised medical sources and the Ministry of Health and Family Welfare. 

On YouTube, we’re surfacing authoritative information in a set of playlists about vaccines, preventing the spread of COVID-19, and facts from experts on COVID-19 care.

Our YouTube India channel features a set of playlists to share tips and information on COVID-19 care 

Testing and vaccination center locations

In addition to showing 2,500 testing centers on Search and Maps, we’re now sharing the locations of over 23,000 vaccination centers nationwide, in English and eight Indian languages. And we’re continuing to work closely with the Ministry of Health and Family Welfare to make more vaccination center information available to users throughout India.

Searching for vaccines in Maps and Search now shows over 23,000 vaccination centers across the country, in English and eight Indian languages

Pilot on hospital beds and medical oxygen availability

We know that some of the most crucial information people are searching for is the availability of hospital beds and access to medical oxygen. To help them find answers more easily, we’re testing a new feature using the Q&A function in Maps that enables people to ask about and share local information on the availability of beds and medical oxygen in select locations. Because this information is user generated rather than provided by authorised sources, please verify its accuracy and freshness before acting on it.

Amplifying vital safety and vaccination messages

As well as providing authoritative answers to queries, we’re using our channels to help extend the reach of health information campaigns. That includes the ‘Get the Facts’ vaccine campaign, which encourages people to seek out authoritative information and content about vaccines. We’re also surfacing important safety messages through promotions on the Google homepage, Doodles and reminders within our apps and services.

Via the Google Search homepage and reminders within our apps and services, we are reminding people to stay safe, stay masked, and get authoritative information on vaccines

Supporting health authorities, organizations, and affected communities

Since the second wave began, we’ve been running an internal donation campaign to raise funds for nonprofit organizations helping those most in need, including GiveIndia, Charities Aid Foundation India, GOONJ, and United Way of Mumbai. This campaign has raised over $4.6 million (INR 33 crore) to date, and continues to generate much-needed support for relief efforts. 

We recognize that many more nonprofits need donations, and that Indians are eager to help where they can—so we’ve rolled out a COVID Aid campaign on Google Pay, featuring non-profit organizations like GiveIndia, Charities Aid Foundation, Goonj, Save the Children, Seeds, UNICEF India (National NGOs) and United Way. We want to thank all our Google Pay users who have contributed to these organisations, and we hope this effort will make a difference where it matters most.

On Google Pay people can contribute funds to non-profit organizations involved in COVID response

As India battles this devastating wave, we’ll keep doing all we can to support the selfless individuals and committed organizations on the front lines of the response. There’s a long way to go—but standing together in solidarity, working together with determination, we can and will turn the tide.  

Posted by the COVID Response team, Google India


Google Workspace Updates Weekly Recap – May 7, 2021

New updates 

There are no new updates to share this week. Please see below for a recap of published announcements. 

Previous announcements 


The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details. 


Google for Education transformation reports window open, available worldwide 
The next reporting window for Google for Education transformation reports is now available for K-12 Google Workspace for Education customers worldwide. | Learn more. 



“Show Editors” provides more context on changes made in Google Docs 
You can now view richer information on the edit history of a particular range of content in Google Docs. | Available to Google Workspace Business Standard, Business Plus, Enterprise Standard, Enterprise Plus, and Education Plus customers. | Learn more.



Improve Google Cloud Search results with Contextual Boost 
We’re adding the ability to boost Google Cloud Search results using the Cloud Search Query API for third-party data sources. Contextual boost is one of the key ways to enable search personalization. | Available to Google Cloud Search customers. | Learn more.



Additional admin controls for Google Voice ring groups 
Admins can now configure a “fixed order” pattern for their ring groups and change the maximum duration a call should ring before proceeding to the “unanswered call” behavior. | Available to all Google Workspace and G Suite customers with Google Voice Standard and Premier licenses. | Learn more. 



Specify which attributes are available for the Secure LDAP client 
Admins can now specify which attributes they’d like to make available for the LDAP Client, such as system, public and private attributes. | Available to Google Workspace Enterprise Standard, Enterprise Plus, Education Fundamentals, and Education Plus, G Suite Enterprise, and Cloud Identity Premium customers. | Learn more.



More options for customizing a chart’s line and fill styling in Google Sheets 
We’ve added more line and fill customization options for series and series items. | Learn more. 



For a recap of announcements in the past six months, check out What’s new in Google Workspace (recent releases).

No mountain? No problem: This Googler DIYed a rock wall

As a communications manager, Milan-based Googler Andrea Cristallini knows the importance of connecting with others through shared experiences. So when his partner Silvia suggested they start a sport together, he was excited to bond over something new. In December 2019, they went rock climbing together at a local gym for the first time and then climbed a real mountain soon after. 

“I fell in love with it,” Andrea says. “The movements are similar to a dance. I enjoy the concentration it requires, how it feels when you touch the stone, and the idea of ascending to see what’s up there — and take in the landscape around you.”

Then, two months later, Italy went into lockdown. “We had this new passion and no chance to practice it,” Andrea says. 

Thrilled with their new hobby, they decided to build their own climbing wall at home. “If you can’t go to the mountain, bring the mountain to you,” Andrea says. “I thought why not? It may not work, but we have several weekends ahead and nowhere to go.” 

Like so many others who found ways to pass the extra time at home, Andrea and Silvia turned to Search and YouTube. They read blog posts and watched videos on how to build a DIY training wall. 

They started small, with just one oriented panel leaning against a wall in their apartment and a few climbing holds scattered across it. But as time passed and the end of the pandemic was nowhere in sight, they added a second panel to the first, reinforcing them with support beams. They drilled in more climbing holds in different shapes and sizes. Then they connected the panels with a winch system that lets them rotate the whole structure to increase the difficulty and overhang. 

“The biggest challenge was designing it from scratch without experience,” Andrea says. “But I had the passion and the time.” 

Since then, Andrea and Silvia have used the climbing wall nearly every day. “Climbing helps us detach from screens; it’s a great mental break,” Andrea says. “I’m really motivated to get better at climbing, and it also makes me excited about spending more time outside soon.” 

Even after the pandemic is over, Andrea has no plans to take down the wall. "It's part of the house now, and it's very effective for training," he says. "On the contrary, I'll start inviting friends at home to practice!" 

More options for customizing a chart’s line and fill styling in Google Sheets

Quick Summary 

We’ve added more line and fill customization options for series and series items. You can now modify: 
  • Color 
  • Opacity 
  • Line dash styles 
  • Line thickness 
For column-shaped series, we’ve added the ability to add and style borders, a highly requested feature. 



Note: These new options are not available for pie charts; however, the ability to change pie slice colors and add borders is already available. 

We hope these new options help you best display important data and create more impactful reports with Sheets. 

Getting started 


Rollout pace 


Availability 

  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers

Resources 

Helping protect people from financial fraud in the U.K.

Over the last few years, people in the U.K. have been targeted by increasingly sophisticated scammers, online and offline. According to UK Finance, total fraud losses in 2020 reached £1.26 billion. Criminal gangs are using multiple malicious methods, including phishing emails, spoof phone calls and texts, shopping scams and impersonation scams, as well as scam advertising on social media and search engines.

Joining efforts to collaborate across industries

Tackling the scale of this problem requires collaboration across government, the financial services industry, the telecommunications industry, the tech industry and law enforcement. To play our part in this effort, we have become the first major technology company to join Stop Scams UK, and we will develop and share best practices with existing members from the financial services and telecoms industries.


We also understand the importance of ensuring people are informed about how to spot the tactics of scammers and avoid falling victim to fraud, which is why we have pledged $5 million in advertising credits to support public awareness campaigns. The ad credits will be offered to cross-industry organisations already campaigning on this issue, as well as government bodies undertaking awareness campaigns.

Strengthening measures to protect U.K. consumers

As well as better equipping people to spot a scam, we know how vital it is to protect people from fraud. Over the next few months we will be developing and rolling out further restrictions on financial services advertising in the U.K. to protect consumers and legitimate advertisers.

The new measures build on significant work we have done to date to help stop financial scammers in the U.K., working closely with the FCA (Financial Conduct Authority):

  • In early 2020, we worked with the FCA to receive notifications when additions are made to the FCA warning list, which now contains more than 4,000 websites.
  • Over the past year, we introduced several verification processes to learn more about advertisers and their business operations. During the verification period, we pause advertiser accounts if their advertising or business practices are suspected of causing harm. We now require all U.K. financial services advertisers to complete these programs in order to run ads.
  • We updated our unreliable claims policy to restrict the rates of return a firm can advertise and to ban terms that make unrealistic promises of large financial returns with minimal risk, effort or investment. 
  • We recently reviewed the user experiences that bad actors tend to target in the U.K. and introduced further restrictions, preventing ads from showing on those searches.

Globally, Google has also introduced new advertiser identity verification, rolling it out across the U.K. beginning earlier this year. Advertisers now need to submit personal legal identification, business incorporation documents or other information that proves who they are and the country in which they operate. This means Google can more effectively identify bad actors in the ecosystem from the start.

Ready to respond to evolving scammer tactics

At Google, thousands of people work around the clock to deliver a safe experience for users, creators, publishers and advertisers. Our teams use a mix of technology, including sophisticated machine learning, and human review to enforce our policies. This combination of technology and talent means policy violations can be spotted and action can be taken to remove bad ads.


Our teams are working hard on this issue because we all want U.K. consumers to feel safe and protected when they are managing their finances. Even as attempts by scammers evolve, we will continue to take strong action and work in partnership with others to help keep consumers safe.


Crisscrossed Captions: Semantic Similarity for Images and Text

The past decade has seen remarkable progress on automatic image captioning, a task in which a computer algorithm creates written descriptions for images. Much of the progress has come through the use of modern deep learning methods developed for both computer vision and natural language processing, combined with large scale datasets that pair images with descriptions created by people. In addition to supporting important practical applications, such as providing descriptions of images for visually impaired people, these datasets also enable investigations into important and exciting research questions about grounding language in visual inputs. For example, learning deep representations for a word like “car” can draw on both linguistic and visual contexts.

Image captioning datasets that contain pairs of textual descriptions and their corresponding images, such as MS-COCO and Flickr30k, have been widely used to learn aligned image and text representations and to build captioning models. Unfortunately, these datasets have limited cross-modal associations: images are not paired with other images, captions are only paired with other captions of the same image (also called co-captions), there are image-caption pairs that match but are not labeled as a match, and there are no labels that indicate when an image-caption pair does not match. This undermines research into how inter-modality learning (connecting captions to images, for example) impacts intra-modality tasks (connecting captions to captions or images to images). This is important to address, especially because a fair amount of work on learning from images paired with text is motivated by arguments about how visual elements should inform and improve representations of language.

To address this evaluation gap, we present "Crisscrossed Captions: Extended Intramodal and Intermodal Semantic Similarity Judgments for MS-COCO", which was recently presented at EACL 2021. The Crisscrossed Captions (CxC) dataset extends the development and test splits of MS-COCO with semantic similarity ratings for image-text, text-text and image-image pairs. The rating criteria are based on Semantic Textual Similarity, an existing and widely-adopted measure of semantic relatedness between pairs of short texts, which we extend to include judgments about images as well. In all, CxC contains human-derived semantic similarity ratings for 267,095 pairs (derived from 1,335,475 independent judgments), a massive extension in scale and detail to the 50k original binary pairings in MS-COCO’s development and test splits. We have released CxC’s ratings, along with code to merge CxC with existing MS-COCO data. Anyone familiar with MS-COCO can thus easily enhance their experiments with CxC.

Crisscrossed Captions extends the MS-COCO evaluation sets by adding human-derived semantic similarity ratings for existing image-caption pairs and co-captions (solid lines), and it increases rating density by adding human ratings for new image-caption, caption-caption and image-image pairs (dashed lines).*

Creating the CxC Dataset
If a picture is worth a thousand words, it is likely because there are so many details and relationships between objects that are generally depicted in pictures. We can describe the texture of the fur on a dog, name the logo on the frisbee it is chasing, mention the expression on the face of the person who has just thrown the frisbee, or note the vibrant red on a large leaf in a tree above the person’s head, and so on.

The CxC dataset extends the MS-COCO evaluation splits with graded similarity associations within and across modalities. MS-COCO has five captions for each image, split into 410k training, 25k development, and 25k test captions (for 82k, 5k, 5k images, respectively). An ideal extension would rate every pair in the dataset (caption-caption, image-image, and image-caption), but this is infeasible as it would require obtaining human ratings for billions of pairs; the 25k development captions alone would form over 300 million caption-caption pairs.

Given that randomly selected pairs of images and captions are likely to be dissimilar, we came up with a way to select items for human rating that would include at least some new pairs with high expected similarity. To reduce the dependence of the chosen pairs on the models used to find them, we introduce an indirect sampling scheme (depicted below) where we encode images and captions using different encoding methods and compute the similarity between pairs of same modality items, resulting in similarity matrices. Images are encoded using Graph-RISE embeddings, while captions are encoded using two methods — Universal Sentence Encoder (USE) and average bag-of-words (BoW) based on GloVe embeddings. Since each MS-COCO example has five co-captions, we average the co-caption encodings to create a single representation per example, ensuring all caption pairs can be mapped to image pairs (more below on how we select intermodality pairs).
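
To make this step concrete, here is a minimal numpy sketch of the matrix construction. The random arrays are stand-ins for the Graph-RISE image embeddings and the USE/GloVe caption embeddings described above, and the helper function name is ours:

```python
import numpy as np

# Stand-ins for the real encoders: in the setup described above, image
# embeddings come from Graph-RISE and caption embeddings from USE or
# averaged GloVe vectors. Shapes: 5k images, five captions per image.
n_images, n_co_captions, dim = 5000, 5, 64
rng = np.random.default_rng(0)
image_emb = rng.normal(size=(n_images, dim))
caption_emb = rng.normal(size=(n_images, n_co_captions, dim))

# Average the five co-caption encodings so each example has a single
# text representation; caption pairs then map directly to image pairs.
text_emb = caption_emb.mean(axis=1)

def cosine_similarity_matrix(x):
    """All-pairs cosine similarity: L2-normalize rows, then dot products."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T

text_sim = cosine_similarity_matrix(text_emb)    # 5k x 5k
image_sim = cosine_similarity_matrix(image_emb)  # 5k x 5k
```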

Top: Text similarity matrix (each cell corresponds to a similarity score) constructed using averaged co-caption encodings, so each text entry corresponds to a single image, resulting in a 5k x 5k matrix. Two different text encoding methods were used, but only one text similarity matrix has been shown for simplicity. Bottom: Image similarity matrix for each image in the dataset, resulting in a 5k x 5k matrix.

The next step of the indirect sampling scheme is to use the computed similarities of images for a biased sampling of caption pairs for human rating (and vice versa). For example, we select two captions with high computed similarities from the text similarity matrix, then take each of their images, resulting in a new pair of images that are different in appearance but similar in what they depict based on their descriptions. For example, the captions “A dog looking bashfully to the side” and “A black dog lifts its head to the side to enjoy a breeze” would have a reasonably high model similarity, so the corresponding images of the two dogs in the figure below could be selected for image similarity rating. This step can also start with two images with high computed similarities to yield a new pair of captions. We now have indirectly sampled new intramodal pairs — at least some of which are highly similar — for which we obtain human ratings.
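
Continuing the sketch above, the biased sampling step can be approximated as follows: take the most similar off-diagonal entries of one modality's similarity matrix and read the resulting index pairs in the other modality. The function name and the choice of k are illustrative, not taken from the paper's code:

```python
import numpy as np

def most_similar_pairs(sim, k=10):
    """Return the k most similar off-diagonal (i, j) index pairs, i < j."""
    iu = np.triu_indices_from(sim, k=1)    # upper triangle, diagonal excluded
    order = np.argsort(sim[iu])[::-1][:k]  # largest similarities first
    return list(zip(iu[0][order], iu[1][order]))

# Caption pairs are chosen where the *images* are similar, and image pairs
# where the *captions* are similar; both sets then go to human raters.
caption_pairs_to_rate = most_similar_pairs(image_sim)
image_pairs_to_rate = most_similar_pairs(text_sim)
```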

Top: Pairs of images are picked based on their computed caption similarity. Bottom: Pairs of captions are picked based on the computed similarity of the images they describe.

Last, we use these new intramodal pairs and their human ratings to select new intermodal pairs for human rating. We do this by using existing image-caption pairs to link between modalities. For example, if a caption pair example ij was rated by humans as highly similar, we pick the image from example i and the caption from example j to obtain a new intermodal pair for human rating. And again, we use the intramodal pairs with the highest rated similarity for sampling, because this includes at least some new pairs with high similarity. Finally, we also add human ratings for all existing intermodal pairs and a large sample of co-captions.
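
A small sketch of that linking step, with an illustrative rating threshold (the paper describes the actual selection criteria):

```python
def intermodal_pairs_from_rated(pairs, ratings, threshold=4.0):
    """Bridge modalities through existing image-caption links.

    For a caption pair (i, j) rated at or above the threshold, emit the
    indices (i, j) read as (image of example i, caption of example j).
    The threshold is an illustrative choice, not a value from the paper.
    """
    return [(i, j) for (i, j), r in zip(pairs, ratings) if r >= threshold]
```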

The following table shows examples of semantic image similarity (SIS) and semantic image-text similarity (SITS) pairs corresponding to each rating, with 5 being the most similar and 0 being completely dissimilar.

Examples for each human-derived similarity score (left: 5 to 0, 5 being very similar and 0 being completely dissimilar) of image pairs based on SIS (middle) and SITS (right) tasks. Note that these examples are for illustrative purposes and are not themselves in the CxC dataset.

Evaluation
MS-COCO supports three retrieval tasks:

  1. Given an image, find its matching captions out of all other captions in the evaluation set.
  2. Given a caption, find its corresponding image out of all other images in the evaluation set.
  3. Given a caption, find its other co-captions out of all other captions in the evaluation set.

MS-COCO’s pairs are incomplete because captions created for one image at times apply equally well to another, yet these associations are not captured in the dataset. CxC enhances these existing retrieval tasks with new positive pairs, and it also supports a new image-image retrieval task. With its graded similarity judgments, CxC also makes it possible to measure correlations between model and human rankings. Retrieval metrics in general focus only on positive pairs, while CxC’s correlation scores additionally account for the relative ordering of similarity and include low-scoring items (non-matches). Supporting these evaluations on a common set of images and captions makes them more valuable for understanding inter-modal learning compared to disjoint sets of caption-image, caption-caption, and image-image associations.
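
As a sketch of how such a correlation score can be computed, here is a comparison of hypothetical model similarity scores against hypothetical human ratings using Spearman's rank correlation (the paper details the exact correlation measure used):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: model similarity scores and human CxC ratings (0-5)
# for the same list of pairs, including low-scoring non-matches.
model_scores = np.array([0.91, 0.78, 0.64, 0.40, 0.22, 0.05])
human_ratings = np.array([5, 4, 4, 2, 1, 0])

# A rank correlation rewards getting the *ordering* of similarities right,
# unlike retrieval metrics, which only look at positive pairs.
rho, pval = spearmanr(model_scores, human_ratings)
print(f"Spearman correlation: {rho:.3f}")
```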

We ran a series of experiments to show the utility of CxC’s ratings. For this, we constructed three dual encoder (DE) models using BERT-base as the text encoder and EfficientNet-B4 as the image encoder:

  1. A text-text (DE_T2T) model that uses a shared text encoder for both sides.
  2. An image-text model (DE_I2T) that uses the aforementioned text and image encoders, and includes a layer above the text encoder to match the image encoder output.
  3. A multitask model (DE_I2T+T2T) trained on a weighted combination of text-text and image-text tasks (see the sketch after this list).
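
For a concrete picture of the image-text setup, here is a minimal PyTorch sketch of a DE_I2T-style model. The encoder arguments are stand-ins for BERT-base and EfficientNet-B4, and the in-batch contrastive loss and temperature are our assumptions for illustration, not details confirmed by the post:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    """Minimal image-text dual encoder in the spirit of DE_I2T.

    text_encoder and image_encoder are stand-ins for BERT-base and
    EfficientNet-B4: any modules returning (batch, text_dim) and
    (batch, image_dim) features work for this sketch.
    """
    def __init__(self, text_encoder, image_encoder, text_dim, image_dim):
        super().__init__()
        self.text_encoder = text_encoder
        self.image_encoder = image_encoder
        # The layer above the text encoder that matches the image
        # encoder's output size, as described in item 2 above.
        self.text_proj = nn.Linear(text_dim, image_dim)

    def forward(self, text_inputs, images):
        t = self.text_proj(self.text_encoder(text_inputs))
        v = self.image_encoder(images)
        return F.normalize(t, dim=-1), F.normalize(v, dim=-1)

def in_batch_contrastive_loss(t, v, temperature=0.05):
    """Symmetric in-batch softmax loss; matched pairs sit on the diagonal.

    The loss form and temperature are illustrative assumptions.
    """
    logits = t @ v.T / temperature
    labels = torch.arange(t.size(0), device=t.device)
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2
```
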
CxC retrieval results — a comparison of our text-text (T2T), image-text (I2T) and multitask (I2T+T2T) dual encoder models on all four retrieval tasks.

From the results on the retrieval tasks, we can see that DE_I2T+T2T (yellow bar) performs better than DE_I2T (red bar) on the image-text and text-image retrieval tasks. Thus, adding the intramodal (text-text) training task helped improve the intermodal (image-text, text-image) performance. As for the other two intramodal tasks (text-text and image-image), DE_I2T+T2T shows strong, balanced performance on both of them.

CxC correlation results for the same models shown above.

For the correlation tasks, DE_I2T performs the best on SIS and DE_I2T+T2T is the best overall. The correlation scores also show that DE_I2T performs well only on images: it has the highest SIS but has much worse STS. Adding the text-text loss to DE_I2T training (DE_I2T+T2T) produces more balanced overall performance.

The CxC dataset provides a much more complete set of relationships between and among images and captions than the raw MS-COCO image-caption pairs. The new ratings have been released and further details are in our paper. We hope to encourage the research community to push the state of the art on the tasks introduced by CxC with better models for jointly learning inter- and intra-modal representations.

Acknowledgments
The core team includes Daniel Cer, Yinfei Yang and Austin Waters. We thank Julia Hockenmaier for her input on CxC’s formulation; the Google Data Compute Team, especially Ashwin Kakarla and Mohd Majeed, for their tooling and annotation support; Yuan Zhang and Eugene Ie for their comments on the initial versions of the paper; and Daphne Luong for executive support for the data collection.


  *All the images in the article have been taken from the Open Images dataset under the CC BY 4.0 license. 

Source: Google AI Blog


Apply now for the 2021 NTEN Digital Inclusion Fellowship

Since the first cohort of NTEN’s Digital Inclusion Fellowship in 2015, Google Fiber has supported this innovative program, which helps nonprofit professionals with deep connections to digitally distressed communities launch or expand digital inclusion programs. As a former fellow with Austin Free-Net, I’ve seen firsthand the impact that a focused staff member can have on connecting more people to the skills and resources they need to navigate our digital world. 



The pandemic has emphasized the necessity of access to fast, reliable internet. Organizations that once considered digital equity and literacy tangential to their mission now find it essential to helping the communities they serve. 

Stephanie De Leon, our Digital Inclusion Fellow with AVANCE-Austin, shared, “I can proudly say that all 240 families served in the AVANCE-Austin Parent-Child Education Program received brand new tablets with internet connectivity and resources plus 1-on-1 training on how to use them to help close the gap in digital inequity.”

Over the past six years, Google Fiber has funded 59 fellows across the country, and these extraordinary individuals and organizations are making a huge difference in the everyday lives of their constituents. 

Kayla Bradshaw, a fellow with the United Way of Utah County in Provo, Utah, said: “Typically, we hold in-person trainings for our volunteer income tax assistance (VITA) program. Most of our volunteers are elderly and do not want to meet in person. We developed an online training distributed through YouTube to train all volunteers. The volunteers then receive technical support through the tax season as they assist low-income families in filing their taxes.”

Think a staff member in your organization could benefit from being a Digital Inclusion Fellow? Applications are open now for the next cohort (lucky number 7!). We hope you’ll consider joining the group, or pass this on to an organization that needs their own fellow. 

Posted by Daniel Lucio, Community Impact Manager





New safety section in Google Play will give transparency into how apps use data

Posted by Suzanne Frey, VP, Product, Android Security and Privacy


We work closely with developers to keep Google Play a safe, trusted space for billions of people to enjoy the latest Android apps. Today, we’re pre-announcing an upcoming safety section in Google Play that will help people understand the data an app collects or shares, if that data is secured, and additional details that impact privacy and security.

Developers agree that people should have transparency and control over their data. And they want simple ways to communicate app safety that are easy to understand and help users to make informed choices about how their data is handled. Developers also want to give additional context to explain data use and how safety practices could affect the app experience. So in addition to the data an app collects or shares, we’re introducing new elements to highlight whether:

  1. The app has security practices, like data encryption
  2. The app follows our Families policy
  3. The app needs this data to function or users have a choice in sharing it
  4. The app’s safety section is verified by an independent third-party
  5. The app enables users to request data deletion, if they decide to uninstall

This can be a big change, so we’re sharing this in advance and building it together with developers.

What this section will include

Among other things, we’ll ask developers to share:

  • What type of data is collected and stored: Examples of potential options are approximate or precise location, contacts, personal information (e.g. name, email address), photos & videos, audio files, and storage files
  • How the data is used: Examples of potential options are app functionality and personalization

Similar to app details like screenshots and descriptions, developers are responsible for the information disclosed in their section. Google Play will introduce a policy that requires developers to provide accurate information. If we find that a developer has misrepresented the data they’ve provided and is in violation of the policy, we will require the developer to fix it. Apps that don’t become compliant will be subject to policy enforcement.

What you can expect

All apps on Google Play - including Google's own apps - will be required to share this information and provide a privacy policy.

We’re committed to ensuring that developers have plenty of time to prepare. This summer, we’ll share the new policy requirements and resources, including detailed guidance on app privacy policies. Starting Q2 2022, new app submissions and app updates must include this information.

Timeline

Target Timeline (Dates subject to change)

In the future, we’ll continue providing new ways to simplify control for users and automate more work for developers.

In the meantime, here are some resources to help you design secure and privacy-friendly apps.

We’re excited to advance our partnership with developers to make Google Play a trustworthy platform for everyone.

